00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1717 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2978 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.138 The recommended git tool is: git 00:00:00.138 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.234 Using shallow fetch with depth 1 00:00:00.234 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.234 > git --version # timeout=10 00:00:00.259 > git --version # 'git version 2.39.2' 00:00:00.259 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.680 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.692 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.704 Checking out Revision 9a23290da272374f14acecb1f0954a7f78afc3cb (FETCH_HEAD) 00:00:05.704 > git config core.sparsecheckout # timeout=10 00:00:05.714 > git read-tree -mu HEAD # timeout=10 00:00:05.731 > git checkout -f 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=5 00:00:05.747 Commit message: "jenkins/perf: add artifacts cleanup for spdk files" 00:00:05.747 > git rev-list --no-walk 9a23290da272374f14acecb1f0954a7f78afc3cb # timeout=10 00:00:05.869 [Pipeline] Start of Pipeline 00:00:05.880 [Pipeline] library 00:00:05.882 Loading library shm_lib@master 00:00:05.882 Library shm_lib@master is cached. Copying from home. 
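For reference, the depth-1 checkout the Jenkins git plugin performs above boils down to a short hand-run sequence; this is only a sketch assembled from the commands printed in the log (the repository URL, refspec, and revision are copied verbatim from it, while the "jbp" target directory name is an arbitrary choice for illustration):

  # Sketch of the shallow checkout shown above; URL and SHA come from the log,
  # the "jbp" directory name is an assumption for illustration only.
  git init jbp && cd jbp
  git fetch --tags --force --progress --depth=1 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f 9a23290da272374f14acecb1f0954a7f78afc3cb   # the FETCH_HEAD commit resolved above
  git log --oneline -1   # "jenkins/perf: add artifacts cleanup for spdk files"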
00:00:05.901 [Pipeline] node 00:00:20.904 Still waiting to schedule task 00:00:20.904 ‘CYP10’ is offline 00:00:20.905 ‘CYP11’ is offline 00:00:20.905 ‘CYP12’ is offline 00:00:20.905 ‘CYP13’ is offline 00:00:20.905 ‘CYP7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘CYP8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘CYP9’ is offline 00:00:20.905 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.905 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘GP1’ is offline 00:00:20.906 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘GP2’ is offline 00:00:20.906 ‘GP4’ is offline 00:00:20.906 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-SM0’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.906 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘WFP21’ is offline 00:00:20.907 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’ 
00:00:20.907 ‘WFP4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘WFP52’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘WFP6’ is offline 00:00:20.907 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:20.907 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:03:24.725 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:24.727 [Pipeline] { 00:03:24.739 [Pipeline] catchError 00:03:24.741 [Pipeline] { 00:03:24.758 [Pipeline] wrap 00:03:24.770 [Pipeline] { 00:03:24.779 [Pipeline] stage 00:03:24.781 [Pipeline] { (Prologue) 00:03:24.982 [Pipeline] sh 00:03:25.260 + logger -p user.info -t JENKINS-CI 00:03:25.280 [Pipeline] echo 00:03:25.281 Node: WFP16 00:03:25.291 [Pipeline] sh 00:03:25.589 [Pipeline] setCustomBuildProperty 00:03:25.601 [Pipeline] echo 00:03:25.603 Cleanup processes 00:03:25.608 [Pipeline] sh 00:03:25.890 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.890 3170992 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.903 [Pipeline] sh 00:03:26.184 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:26.184 ++ grep -v 'sudo pgrep' 00:03:26.184 ++ awk '{print $1}' 00:03:26.184 + sudo kill -9 00:03:26.184 + true 00:03:26.199 [Pipeline] cleanWs 00:03:26.209 [WS-CLEANUP] Deleting project workspace... 00:03:26.209 [WS-CLEANUP] Deferred wipeout is used... 00:03:26.215 [WS-CLEANUP] done 00:03:26.219 [Pipeline] setCustomBuildProperty 00:03:26.234 [Pipeline] sh 00:03:26.511 + sudo git config --global --replace-all safe.directory '*' 00:03:26.584 [Pipeline] nodesByLabel 00:03:26.585 Found a total of 1 nodes with the 'sorcerer' label 00:03:26.597 [Pipeline] httpRequest 00:03:26.602 HttpMethod: GET 00:03:26.603 URL: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:03:26.606 Sending request to url: http://10.211.164.101/packages/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:03:26.608 Response Code: HTTP/1.1 200 OK 00:03:26.608 Success: Status code 200 is in the accepted range: 200,404 00:03:26.609 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:03:26.746 [Pipeline] sh 00:03:27.025 + tar --no-same-owner -xf jbp_9a23290da272374f14acecb1f0954a7f78afc3cb.tar.gz 00:03:27.046 [Pipeline] httpRequest 00:03:27.050 HttpMethod: GET 00:03:27.051 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:03:27.051 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:03:27.055 Response Code: HTTP/1.1 200 OK 00:03:27.055 Success: Status code 200 is in the accepted range: 200,404 00:03:27.056 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:03:29.213 [Pipeline] sh 00:03:29.502 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:03:32.046 [Pipeline] sh 00:03:32.326 + git -C spdk log --oneline -n5 00:03:32.326 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:03:32.326 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:03:32.326 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:03:32.326 d2ea4ecb1 llvm/vfio: Suppress checking leaks for 
`spdk_nvme_ctrlr_alloc_io_qpair` 00:03:32.326 3b33f4333 test/nvme/cuse: Fix typo 00:03:32.337 [Pipeline] } 00:03:32.352 [Pipeline] // stage 00:03:32.359 [Pipeline] stage 00:03:32.361 [Pipeline] { (Prepare) 00:03:32.376 [Pipeline] writeFile 00:03:32.391 [Pipeline] sh 00:03:32.671 + logger -p user.info -t JENKINS-CI 00:03:32.683 [Pipeline] sh 00:03:32.963 + logger -p user.info -t JENKINS-CI 00:03:32.974 [Pipeline] sh 00:03:33.256 + cat autorun-spdk.conf 00:03:33.256 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.256 SPDK_TEST_NVMF=1 00:03:33.256 SPDK_TEST_NVME_CLI=1 00:03:33.256 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:33.256 SPDK_TEST_NVMF_NICS=e810 00:03:33.256 SPDK_RUN_UBSAN=1 00:03:33.256 NET_TYPE=phy 00:03:33.265 RUN_NIGHTLY=1 00:03:33.272 [Pipeline] readFile 00:03:33.293 [Pipeline] withEnv 00:03:33.294 [Pipeline] { 00:03:33.305 [Pipeline] sh 00:03:33.584 + set -ex 00:03:33.584 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:33.584 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:33.584 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.584 ++ SPDK_TEST_NVMF=1 00:03:33.584 ++ SPDK_TEST_NVME_CLI=1 00:03:33.584 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:33.585 ++ SPDK_TEST_NVMF_NICS=e810 00:03:33.585 ++ SPDK_RUN_UBSAN=1 00:03:33.585 ++ NET_TYPE=phy 00:03:33.585 ++ RUN_NIGHTLY=1 00:03:33.585 + case $SPDK_TEST_NVMF_NICS in 00:03:33.585 + DRIVERS=ice 00:03:33.585 + [[ tcp == \r\d\m\a ]] 00:03:33.585 + [[ -n ice ]] 00:03:33.585 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:33.585 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:33.585 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:33.585 rmmod: ERROR: Module irdma is not currently loaded 00:03:33.585 rmmod: ERROR: Module i40iw is not currently loaded 00:03:33.585 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:33.585 + true 00:03:33.585 + for D in $DRIVERS 00:03:33.585 + sudo modprobe ice 00:03:33.585 + exit 0 00:03:33.594 [Pipeline] } 00:03:33.612 [Pipeline] // withEnv 00:03:33.616 [Pipeline] } 00:03:33.636 [Pipeline] // stage 00:03:33.648 [Pipeline] catchError 00:03:33.651 [Pipeline] { 00:03:33.667 [Pipeline] timeout 00:03:33.668 Timeout set to expire in 40 min 00:03:33.669 [Pipeline] { 00:03:33.686 [Pipeline] stage 00:03:33.688 [Pipeline] { (Tests) 00:03:33.707 [Pipeline] sh 00:03:33.988 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:33.988 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:33.988 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:33.988 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:33.988 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.988 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:33.988 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:33.988 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:33.988 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:33.988 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:33.988 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:33.988 + source /etc/os-release 00:03:33.988 ++ NAME='Fedora Linux' 00:03:33.988 ++ VERSION='38 (Cloud Edition)' 00:03:33.988 ++ ID=fedora 00:03:33.988 ++ VERSION_ID=38 00:03:33.988 ++ VERSION_CODENAME= 00:03:33.988 ++ PLATFORM_ID=platform:f38 00:03:33.988 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:33.988 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:33.988 ++ LOGO=fedora-logo-icon 00:03:33.988 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:33.988 ++ HOME_URL=https://fedoraproject.org/ 00:03:33.988 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:33.988 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:33.988 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:33.988 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:33.988 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:33.988 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:33.988 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:33.988 ++ SUPPORT_END=2024-05-14 00:03:33.988 ++ VARIANT='Cloud Edition' 00:03:33.988 ++ VARIANT_ID=cloud 00:03:33.988 + uname -a 00:03:33.988 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:33.988 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:36.535 Hugepages 00:03:36.535 node hugesize free / total 00:03:36.535 node0 1048576kB 0 / 0 00:03:36.535 node0 2048kB 0 / 0 00:03:36.535 node1 1048576kB 0 / 0 00:03:36.535 node1 2048kB 0 / 0 00:03:36.535 00:03:36.535 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:36.535 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:36.535 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:36.535 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:36.535 + rm -f /tmp/spdk-ld-path 00:03:36.535 + source autorun-spdk.conf 00:03:36.535 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:36.535 ++ SPDK_TEST_NVMF=1 00:03:36.535 ++ SPDK_TEST_NVME_CLI=1 00:03:36.535 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:36.535 ++ SPDK_TEST_NVMF_NICS=e810 00:03:36.535 ++ SPDK_RUN_UBSAN=1 00:03:36.535 ++ NET_TYPE=phy 00:03:36.535 ++ RUN_NIGHTLY=1 00:03:36.535 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:36.535 + [[ -n '' ]] 00:03:36.535 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.535 + for M in /var/spdk/build-*-manifest.txt 00:03:36.535 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:36.535 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:36.535 + for M in /var/spdk/build-*-manifest.txt 00:03:36.535 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:36.535 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:36.535 ++ uname 00:03:36.535 + [[ Linux == \L\i\n\u\x ]] 00:03:36.535 + sudo dmesg -T 00:03:36.535 + sudo dmesg --clear 00:03:36.798 + dmesg_pid=3172257 00:03:36.798 + [[ Fedora Linux == FreeBSD ]] 00:03:36.798 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:36.798 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:36.798 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:36.798 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:36.798 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:36.798 + [[ -x /usr/src/fio-static/fio ]] 00:03:36.798 + export FIO_BIN=/usr/src/fio-static/fio 00:03:36.798 + FIO_BIN=/usr/src/fio-static/fio 00:03:36.798 + sudo dmesg -Tw 00:03:36.798 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:36.798 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:36.798 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:36.798 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:36.798 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:36.798 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:36.798 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:36.798 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:36.798 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:36.798 Test configuration: 00:03:36.798 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:36.798 SPDK_TEST_NVMF=1 00:03:36.798 SPDK_TEST_NVME_CLI=1 00:03:36.798 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:36.798 SPDK_TEST_NVMF_NICS=e810 00:03:36.798 SPDK_RUN_UBSAN=1 00:03:36.798 NET_TYPE=phy 00:03:36.798 RUN_NIGHTLY=1 10:00:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:36.798 10:00:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:36.798 10:00:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:36.798 10:00:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:36.798 10:00:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.798 10:00:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.798 10:00:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.798 10:00:09 -- paths/export.sh@5 -- $ export PATH 00:03:36.798 10:00:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.798 10:00:09 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:36.798 10:00:09 -- common/autobuild_common.sh@435 -- $ date +%s 00:03:36.798 10:00:09 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713340809.XXXXXX 00:03:36.798 10:00:09 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713340809.phTanC 00:03:36.798 10:00:09 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:03:36.798 10:00:09 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:03:36.798 10:00:09 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:36.798 10:00:09 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:36.798 10:00:09 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:36.798 10:00:09 -- common/autobuild_common.sh@451 -- $ get_config_params 00:03:36.798 10:00:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:03:36.798 10:00:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.798 10:00:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:03:36.798 10:00:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:36.798 10:00:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:36.798 10:00:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.798 10:00:10 -- spdk/autobuild.sh@16 -- $ date -u 00:03:36.798 Wed Apr 17 08:00:10 AM UTC 2024 00:03:36.798 10:00:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:36.798 LTS-24-g36faa8c31 00:03:36.798 10:00:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:36.798 10:00:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:36.798 10:00:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:36.798 10:00:10 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:36.798 10:00:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:36.798 10:00:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.798 ************************************ 00:03:36.798 START TEST ubsan 00:03:36.798 ************************************ 00:03:36.798 
10:00:10 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:03:36.798 using ubsan 00:03:36.798 00:03:36.798 real 0m0.000s 00:03:36.798 user 0m0.000s 00:03:36.798 sys 0m0.000s 00:03:36.798 10:00:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:36.798 10:00:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.798 ************************************ 00:03:36.798 END TEST ubsan 00:03:36.798 ************************************ 00:03:36.798 10:00:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:36.798 10:00:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:36.798 10:00:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:36.798 10:00:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:03:37.058 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:37.058 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:37.316 Using 'verbs' RDMA provider 00:03:50.096 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:04:02.324 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:02.324 Creating mk/config.mk...done. 00:04:02.324 Creating mk/cc.flags.mk...done. 00:04:02.324 Type 'make' to build. 00:04:02.324 10:00:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:04:02.324 10:00:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:04:02.324 10:00:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:04:02.324 10:00:35 -- common/autotest_common.sh@10 -- $ set +x 00:04:02.324 ************************************ 00:04:02.324 START TEST make 00:04:02.324 ************************************ 00:04:02.324 10:00:35 -- common/autotest_common.sh@1104 -- $ make -j112 00:04:02.324 make[1]: Nothing to be done for 'all'. 
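The build the log enters here (configure followed by make, which in turn drives the bundled DPDK Meson build that follows) can be approximated outside Jenkins with the same options; a minimal sketch, assuming an SPDK checkout with submodules and build dependencies already in place (the configure flags are the ones printed above, while the fio path and the -j112 width are simply this agent's values, not requirements):

  # Rough local equivalent of the configure/build step above; all flags are copied
  # from the log, the paths and -j112 are this particular agent's values.
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j112
  # The full nightly test pass is then driven by autorun.sh with the generated conf file:
  #   ./autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf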
00:04:10.471 The Meson build system 00:04:10.471 Version: 1.3.1 00:04:10.471 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:10.471 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:10.472 Build type: native build 00:04:10.472 Program cat found: YES (/usr/bin/cat) 00:04:10.472 Project name: DPDK 00:04:10.472 Project version: 23.11.0 00:04:10.472 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:10.472 C linker for the host machine: cc ld.bfd 2.39-16 00:04:10.472 Host machine cpu family: x86_64 00:04:10.472 Host machine cpu: x86_64 00:04:10.472 Message: ## Building in Developer Mode ## 00:04:10.472 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:10.472 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:10.472 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:10.472 Program python3 found: YES (/usr/bin/python3) 00:04:10.472 Program cat found: YES (/usr/bin/cat) 00:04:10.472 Compiler for C supports arguments -march=native: YES 00:04:10.472 Checking for size of "void *" : 8 00:04:10.472 Checking for size of "void *" : 8 (cached) 00:04:10.472 Library m found: YES 00:04:10.472 Library numa found: YES 00:04:10.472 Has header "numaif.h" : YES 00:04:10.472 Library fdt found: NO 00:04:10.472 Library execinfo found: NO 00:04:10.472 Has header "execinfo.h" : YES 00:04:10.472 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:10.472 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:10.472 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:10.472 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:10.472 Run-time dependency openssl found: YES 3.0.9 00:04:10.472 Run-time dependency libpcap found: YES 1.10.4 00:04:10.472 Has header "pcap.h" with dependency libpcap: YES 00:04:10.472 Compiler for C supports arguments -Wcast-qual: YES 00:04:10.472 Compiler for C supports arguments -Wdeprecated: YES 00:04:10.472 Compiler for C supports arguments -Wformat: YES 00:04:10.472 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:10.472 Compiler for C supports arguments -Wformat-security: NO 00:04:10.472 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:10.472 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:10.472 Compiler for C supports arguments -Wnested-externs: YES 00:04:10.472 Compiler for C supports arguments -Wold-style-definition: YES 00:04:10.472 Compiler for C supports arguments -Wpointer-arith: YES 00:04:10.472 Compiler for C supports arguments -Wsign-compare: YES 00:04:10.472 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:10.472 Compiler for C supports arguments -Wundef: YES 00:04:10.472 Compiler for C supports arguments -Wwrite-strings: YES 00:04:10.472 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:10.472 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:10.472 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:10.472 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:10.472 Program objdump found: YES (/usr/bin/objdump) 00:04:10.472 Compiler for C supports arguments -mavx512f: YES 00:04:10.472 Checking if "AVX512 checking" compiles: YES 00:04:10.472 Fetching value of define "__SSE4_2__" : 1 00:04:10.472 Fetching value of 
define "__AES__" : 1 00:04:10.472 Fetching value of define "__AVX__" : 1 00:04:10.472 Fetching value of define "__AVX2__" : 1 00:04:10.472 Fetching value of define "__AVX512BW__" : 1 00:04:10.472 Fetching value of define "__AVX512CD__" : 1 00:04:10.472 Fetching value of define "__AVX512DQ__" : 1 00:04:10.472 Fetching value of define "__AVX512F__" : 1 00:04:10.472 Fetching value of define "__AVX512VL__" : 1 00:04:10.472 Fetching value of define "__PCLMUL__" : 1 00:04:10.472 Fetching value of define "__RDRND__" : 1 00:04:10.472 Fetching value of define "__RDSEED__" : 1 00:04:10.472 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:10.472 Fetching value of define "__znver1__" : (undefined) 00:04:10.472 Fetching value of define "__znver2__" : (undefined) 00:04:10.472 Fetching value of define "__znver3__" : (undefined) 00:04:10.472 Fetching value of define "__znver4__" : (undefined) 00:04:10.472 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:10.472 Message: lib/log: Defining dependency "log" 00:04:10.472 Message: lib/kvargs: Defining dependency "kvargs" 00:04:10.472 Message: lib/telemetry: Defining dependency "telemetry" 00:04:10.472 Checking for function "getentropy" : NO 00:04:10.472 Message: lib/eal: Defining dependency "eal" 00:04:10.472 Message: lib/ring: Defining dependency "ring" 00:04:10.472 Message: lib/rcu: Defining dependency "rcu" 00:04:10.472 Message: lib/mempool: Defining dependency "mempool" 00:04:10.472 Message: lib/mbuf: Defining dependency "mbuf" 00:04:10.472 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:10.472 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:10.472 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:10.472 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:10.472 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:10.472 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:10.472 Compiler for C supports arguments -mpclmul: YES 00:04:10.472 Compiler for C supports arguments -maes: YES 00:04:10.472 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:10.472 Compiler for C supports arguments -mavx512bw: YES 00:04:10.472 Compiler for C supports arguments -mavx512dq: YES 00:04:10.472 Compiler for C supports arguments -mavx512vl: YES 00:04:10.472 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:10.472 Compiler for C supports arguments -mavx2: YES 00:04:10.472 Compiler for C supports arguments -mavx: YES 00:04:10.472 Message: lib/net: Defining dependency "net" 00:04:10.472 Message: lib/meter: Defining dependency "meter" 00:04:10.472 Message: lib/ethdev: Defining dependency "ethdev" 00:04:10.472 Message: lib/pci: Defining dependency "pci" 00:04:10.472 Message: lib/cmdline: Defining dependency "cmdline" 00:04:10.472 Message: lib/hash: Defining dependency "hash" 00:04:10.472 Message: lib/timer: Defining dependency "timer" 00:04:10.472 Message: lib/compressdev: Defining dependency "compressdev" 00:04:10.472 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:10.472 Message: lib/dmadev: Defining dependency "dmadev" 00:04:10.472 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:10.472 Message: lib/power: Defining dependency "power" 00:04:10.472 Message: lib/reorder: Defining dependency "reorder" 00:04:10.472 Message: lib/security: Defining dependency "security" 00:04:10.472 Has header "linux/userfaultfd.h" : YES 00:04:10.472 Has header "linux/vduse.h" : YES 00:04:10.472 Message: lib/vhost: Defining dependency "vhost" 00:04:10.472 Compiler 
for C supports arguments -Wno-format-truncation: YES (cached) 00:04:10.472 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:10.472 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:10.472 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:10.472 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:10.472 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:10.472 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:10.472 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:10.472 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:10.472 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:10.472 Program doxygen found: YES (/usr/bin/doxygen) 00:04:10.472 Configuring doxy-api-html.conf using configuration 00:04:10.472 Configuring doxy-api-man.conf using configuration 00:04:10.472 Program mandb found: YES (/usr/bin/mandb) 00:04:10.472 Program sphinx-build found: NO 00:04:10.472 Configuring rte_build_config.h using configuration 00:04:10.472 Message: 00:04:10.472 ================= 00:04:10.472 Applications Enabled 00:04:10.472 ================= 00:04:10.472 00:04:10.472 apps: 00:04:10.472 00:04:10.472 00:04:10.472 Message: 00:04:10.472 ================= 00:04:10.472 Libraries Enabled 00:04:10.472 ================= 00:04:10.472 00:04:10.472 libs: 00:04:10.472 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:10.472 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:10.472 cryptodev, dmadev, power, reorder, security, vhost, 00:04:10.472 00:04:10.472 Message: 00:04:10.472 =============== 00:04:10.472 Drivers Enabled 00:04:10.472 =============== 00:04:10.472 00:04:10.472 common: 00:04:10.472 00:04:10.472 bus: 00:04:10.472 pci, vdev, 00:04:10.472 mempool: 00:04:10.472 ring, 00:04:10.472 dma: 00:04:10.472 00:04:10.472 net: 00:04:10.472 00:04:10.472 crypto: 00:04:10.472 00:04:10.472 compress: 00:04:10.472 00:04:10.472 vdpa: 00:04:10.472 00:04:10.472 00:04:10.472 Message: 00:04:10.472 ================= 00:04:10.472 Content Skipped 00:04:10.472 ================= 00:04:10.472 00:04:10.472 apps: 00:04:10.472 dumpcap: explicitly disabled via build config 00:04:10.472 graph: explicitly disabled via build config 00:04:10.472 pdump: explicitly disabled via build config 00:04:10.472 proc-info: explicitly disabled via build config 00:04:10.472 test-acl: explicitly disabled via build config 00:04:10.472 test-bbdev: explicitly disabled via build config 00:04:10.472 test-cmdline: explicitly disabled via build config 00:04:10.472 test-compress-perf: explicitly disabled via build config 00:04:10.472 test-crypto-perf: explicitly disabled via build config 00:04:10.472 test-dma-perf: explicitly disabled via build config 00:04:10.472 test-eventdev: explicitly disabled via build config 00:04:10.472 test-fib: explicitly disabled via build config 00:04:10.472 test-flow-perf: explicitly disabled via build config 00:04:10.472 test-gpudev: explicitly disabled via build config 00:04:10.472 test-mldev: explicitly disabled via build config 00:04:10.472 test-pipeline: explicitly disabled via build config 00:04:10.472 test-pmd: explicitly disabled via build config 00:04:10.472 test-regex: explicitly disabled via build config 00:04:10.472 test-sad: explicitly disabled via build config 00:04:10.472 test-security-perf: explicitly disabled via build config 00:04:10.472 00:04:10.473 libs: 00:04:10.473 
metrics: explicitly disabled via build config 00:04:10.473 acl: explicitly disabled via build config 00:04:10.473 bbdev: explicitly disabled via build config 00:04:10.473 bitratestats: explicitly disabled via build config 00:04:10.473 bpf: explicitly disabled via build config 00:04:10.473 cfgfile: explicitly disabled via build config 00:04:10.473 distributor: explicitly disabled via build config 00:04:10.473 efd: explicitly disabled via build config 00:04:10.473 eventdev: explicitly disabled via build config 00:04:10.473 dispatcher: explicitly disabled via build config 00:04:10.473 gpudev: explicitly disabled via build config 00:04:10.473 gro: explicitly disabled via build config 00:04:10.473 gso: explicitly disabled via build config 00:04:10.473 ip_frag: explicitly disabled via build config 00:04:10.473 jobstats: explicitly disabled via build config 00:04:10.473 latencystats: explicitly disabled via build config 00:04:10.473 lpm: explicitly disabled via build config 00:04:10.473 member: explicitly disabled via build config 00:04:10.473 pcapng: explicitly disabled via build config 00:04:10.473 rawdev: explicitly disabled via build config 00:04:10.473 regexdev: explicitly disabled via build config 00:04:10.473 mldev: explicitly disabled via build config 00:04:10.473 rib: explicitly disabled via build config 00:04:10.473 sched: explicitly disabled via build config 00:04:10.473 stack: explicitly disabled via build config 00:04:10.473 ipsec: explicitly disabled via build config 00:04:10.473 pdcp: explicitly disabled via build config 00:04:10.473 fib: explicitly disabled via build config 00:04:10.473 port: explicitly disabled via build config 00:04:10.473 pdump: explicitly disabled via build config 00:04:10.473 table: explicitly disabled via build config 00:04:10.473 pipeline: explicitly disabled via build config 00:04:10.473 graph: explicitly disabled via build config 00:04:10.473 node: explicitly disabled via build config 00:04:10.473 00:04:10.473 drivers: 00:04:10.473 common/cpt: not in enabled drivers build config 00:04:10.473 common/dpaax: not in enabled drivers build config 00:04:10.473 common/iavf: not in enabled drivers build config 00:04:10.473 common/idpf: not in enabled drivers build config 00:04:10.473 common/mvep: not in enabled drivers build config 00:04:10.473 common/octeontx: not in enabled drivers build config 00:04:10.473 bus/auxiliary: not in enabled drivers build config 00:04:10.473 bus/cdx: not in enabled drivers build config 00:04:10.473 bus/dpaa: not in enabled drivers build config 00:04:10.473 bus/fslmc: not in enabled drivers build config 00:04:10.473 bus/ifpga: not in enabled drivers build config 00:04:10.473 bus/platform: not in enabled drivers build config 00:04:10.473 bus/vmbus: not in enabled drivers build config 00:04:10.473 common/cnxk: not in enabled drivers build config 00:04:10.473 common/mlx5: not in enabled drivers build config 00:04:10.473 common/nfp: not in enabled drivers build config 00:04:10.473 common/qat: not in enabled drivers build config 00:04:10.473 common/sfc_efx: not in enabled drivers build config 00:04:10.473 mempool/bucket: not in enabled drivers build config 00:04:10.473 mempool/cnxk: not in enabled drivers build config 00:04:10.473 mempool/dpaa: not in enabled drivers build config 00:04:10.473 mempool/dpaa2: not in enabled drivers build config 00:04:10.473 mempool/octeontx: not in enabled drivers build config 00:04:10.473 mempool/stack: not in enabled drivers build config 00:04:10.473 dma/cnxk: not in enabled drivers build config 
00:04:10.473 dma/dpaa: not in enabled drivers build config 00:04:10.473 dma/dpaa2: not in enabled drivers build config 00:04:10.473 dma/hisilicon: not in enabled drivers build config 00:04:10.473 dma/idxd: not in enabled drivers build config 00:04:10.473 dma/ioat: not in enabled drivers build config 00:04:10.473 dma/skeleton: not in enabled drivers build config 00:04:10.473 net/af_packet: not in enabled drivers build config 00:04:10.473 net/af_xdp: not in enabled drivers build config 00:04:10.473 net/ark: not in enabled drivers build config 00:04:10.473 net/atlantic: not in enabled drivers build config 00:04:10.473 net/avp: not in enabled drivers build config 00:04:10.473 net/axgbe: not in enabled drivers build config 00:04:10.473 net/bnx2x: not in enabled drivers build config 00:04:10.473 net/bnxt: not in enabled drivers build config 00:04:10.473 net/bonding: not in enabled drivers build config 00:04:10.473 net/cnxk: not in enabled drivers build config 00:04:10.473 net/cpfl: not in enabled drivers build config 00:04:10.473 net/cxgbe: not in enabled drivers build config 00:04:10.473 net/dpaa: not in enabled drivers build config 00:04:10.473 net/dpaa2: not in enabled drivers build config 00:04:10.473 net/e1000: not in enabled drivers build config 00:04:10.473 net/ena: not in enabled drivers build config 00:04:10.473 net/enetc: not in enabled drivers build config 00:04:10.473 net/enetfec: not in enabled drivers build config 00:04:10.473 net/enic: not in enabled drivers build config 00:04:10.473 net/failsafe: not in enabled drivers build config 00:04:10.473 net/fm10k: not in enabled drivers build config 00:04:10.473 net/gve: not in enabled drivers build config 00:04:10.473 net/hinic: not in enabled drivers build config 00:04:10.473 net/hns3: not in enabled drivers build config 00:04:10.473 net/i40e: not in enabled drivers build config 00:04:10.473 net/iavf: not in enabled drivers build config 00:04:10.473 net/ice: not in enabled drivers build config 00:04:10.473 net/idpf: not in enabled drivers build config 00:04:10.473 net/igc: not in enabled drivers build config 00:04:10.473 net/ionic: not in enabled drivers build config 00:04:10.473 net/ipn3ke: not in enabled drivers build config 00:04:10.473 net/ixgbe: not in enabled drivers build config 00:04:10.473 net/mana: not in enabled drivers build config 00:04:10.473 net/memif: not in enabled drivers build config 00:04:10.473 net/mlx4: not in enabled drivers build config 00:04:10.473 net/mlx5: not in enabled drivers build config 00:04:10.473 net/mvneta: not in enabled drivers build config 00:04:10.473 net/mvpp2: not in enabled drivers build config 00:04:10.473 net/netvsc: not in enabled drivers build config 00:04:10.473 net/nfb: not in enabled drivers build config 00:04:10.473 net/nfp: not in enabled drivers build config 00:04:10.473 net/ngbe: not in enabled drivers build config 00:04:10.473 net/null: not in enabled drivers build config 00:04:10.473 net/octeontx: not in enabled drivers build config 00:04:10.473 net/octeon_ep: not in enabled drivers build config 00:04:10.473 net/pcap: not in enabled drivers build config 00:04:10.473 net/pfe: not in enabled drivers build config 00:04:10.473 net/qede: not in enabled drivers build config 00:04:10.473 net/ring: not in enabled drivers build config 00:04:10.473 net/sfc: not in enabled drivers build config 00:04:10.473 net/softnic: not in enabled drivers build config 00:04:10.473 net/tap: not in enabled drivers build config 00:04:10.473 net/thunderx: not in enabled drivers build config 00:04:10.473 
net/txgbe: not in enabled drivers build config 00:04:10.473 net/vdev_netvsc: not in enabled drivers build config 00:04:10.473 net/vhost: not in enabled drivers build config 00:04:10.473 net/virtio: not in enabled drivers build config 00:04:10.473 net/vmxnet3: not in enabled drivers build config 00:04:10.473 raw/*: missing internal dependency, "rawdev" 00:04:10.473 crypto/armv8: not in enabled drivers build config 00:04:10.473 crypto/bcmfs: not in enabled drivers build config 00:04:10.473 crypto/caam_jr: not in enabled drivers build config 00:04:10.473 crypto/ccp: not in enabled drivers build config 00:04:10.473 crypto/cnxk: not in enabled drivers build config 00:04:10.473 crypto/dpaa_sec: not in enabled drivers build config 00:04:10.473 crypto/dpaa2_sec: not in enabled drivers build config 00:04:10.473 crypto/ipsec_mb: not in enabled drivers build config 00:04:10.473 crypto/mlx5: not in enabled drivers build config 00:04:10.473 crypto/mvsam: not in enabled drivers build config 00:04:10.473 crypto/nitrox: not in enabled drivers build config 00:04:10.473 crypto/null: not in enabled drivers build config 00:04:10.473 crypto/octeontx: not in enabled drivers build config 00:04:10.473 crypto/openssl: not in enabled drivers build config 00:04:10.473 crypto/scheduler: not in enabled drivers build config 00:04:10.473 crypto/uadk: not in enabled drivers build config 00:04:10.473 crypto/virtio: not in enabled drivers build config 00:04:10.473 compress/isal: not in enabled drivers build config 00:04:10.473 compress/mlx5: not in enabled drivers build config 00:04:10.473 compress/octeontx: not in enabled drivers build config 00:04:10.473 compress/zlib: not in enabled drivers build config 00:04:10.473 regex/*: missing internal dependency, "regexdev" 00:04:10.473 ml/*: missing internal dependency, "mldev" 00:04:10.473 vdpa/ifc: not in enabled drivers build config 00:04:10.473 vdpa/mlx5: not in enabled drivers build config 00:04:10.473 vdpa/nfp: not in enabled drivers build config 00:04:10.473 vdpa/sfc: not in enabled drivers build config 00:04:10.473 event/*: missing internal dependency, "eventdev" 00:04:10.473 baseband/*: missing internal dependency, "bbdev" 00:04:10.473 gpu/*: missing internal dependency, "gpudev" 00:04:10.473 00:04:10.473 00:04:10.736 Build targets in project: 85 00:04:10.736 00:04:10.736 DPDK 23.11.0 00:04:10.736 00:04:10.736 User defined options 00:04:10.736 buildtype : debug 00:04:10.736 default_library : shared 00:04:10.736 libdir : lib 00:04:10.736 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:10.736 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:04:10.736 c_link_args : 00:04:10.736 cpu_instruction_set: native 00:04:10.736 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:04:10.736 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:04:10.736 enable_docs : false 00:04:10.736 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:10.736 enable_kmods : false 00:04:10.736 tests : false 00:04:10.736 00:04:10.736 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:11.313 ninja: 
Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:11.313 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:11.313 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:11.313 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:11.579 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:11.579 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:11.579 [6/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:11.579 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:11.579 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:11.579 [9/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:11.579 [10/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:11.579 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:11.579 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:11.579 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:11.579 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:11.579 [15/265] Linking static target lib/librte_kvargs.a 00:04:11.579 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:11.579 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:11.579 [18/265] Linking static target lib/librte_log.a 00:04:11.579 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:11.579 [20/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:11.579 [21/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:11.579 [22/265] Linking static target lib/librte_pci.a 00:04:11.579 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:11.579 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:11.579 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:11.579 [26/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:11.579 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:11.579 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:11.579 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:11.579 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:11.838 [31/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:11.838 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:11.838 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:11.838 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:11.838 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:11.838 [36/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:11.838 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:11.839 [38/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:11.839 [39/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:11.839 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:04:12.101 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:12.101 [42/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:12.101 [43/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:12.101 [44/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.101 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:12.101 [46/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.101 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:12.101 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:12.101 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:12.101 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:12.101 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:12.101 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:12.101 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:12.101 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:12.101 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:12.101 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:12.101 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:12.101 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:12.101 [59/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:12.101 [60/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:12.101 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:12.101 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:12.101 [63/265] Linking static target lib/librte_meter.a 00:04:12.101 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:12.101 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:12.101 [66/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:12.101 [67/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:12.101 [68/265] Linking static target lib/librte_ring.a 00:04:12.101 [69/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:12.101 [70/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:12.101 [71/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:12.101 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:12.101 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:12.101 [74/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:12.101 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:12.101 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:12.101 [77/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:12.101 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:12.101 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:12.101 [80/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:12.101 
[81/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:12.101 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:12.101 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:12.101 [84/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:12.360 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:12.360 [86/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:12.360 [87/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:12.360 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:12.360 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:12.360 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:12.360 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:12.360 [92/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:12.360 [93/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:12.360 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:12.360 [95/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:12.360 [96/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:12.360 [97/265] Linking static target lib/librte_cmdline.a 00:04:12.360 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:12.360 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:12.360 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:12.360 [101/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:12.360 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:12.360 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:12.360 [104/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:12.360 [105/265] Linking static target lib/librte_timer.a 00:04:12.360 [106/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:12.360 [107/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:12.360 [108/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:12.360 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:12.360 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:12.360 [111/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:12.360 [112/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:12.360 [113/265] Linking static target lib/librte_telemetry.a 00:04:12.360 [114/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:12.360 [115/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:12.360 [116/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:12.360 [117/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:12.360 [118/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:12.360 [119/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:12.360 [120/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:12.360 [121/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:12.360 [122/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:12.361 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:12.361 [124/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:12.361 [125/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:12.361 [126/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:12.361 [127/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:12.361 [128/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:12.361 [129/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:12.361 [130/265] Linking static target lib/librte_net.a 00:04:12.361 [131/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:12.361 [132/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:12.361 [133/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:12.361 [134/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:12.361 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:12.361 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:12.361 [137/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:12.361 [138/265] Linking static target lib/librte_mempool.a 00:04:12.361 [139/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:12.361 [140/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:12.361 [141/265] Linking static target lib/librte_rcu.a 00:04:12.361 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:12.361 [143/265] Linking static target lib/librte_compressdev.a 00:04:12.361 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:12.361 [145/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:12.361 [146/265] Linking static target lib/librte_eal.a 00:04:12.361 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:12.361 [148/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.361 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:12.361 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:12.361 [151/265] Linking static target lib/librte_dmadev.a 00:04:12.361 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:12.361 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:12.361 [154/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.361 [155/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.619 [156/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:12.619 [157/265] Linking target lib/librte_log.so.24.0 00:04:12.619 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:12.619 [159/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:12.619 [160/265] Linking static target lib/librte_power.a 00:04:12.619 [161/265] Linking static target lib/librte_security.a 00:04:12.619 [162/265] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:12.619 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:12.619 [164/265] Linking static target lib/librte_reorder.a 00:04:12.619 [165/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:12.619 [166/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:12.619 [167/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:12.619 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:12.619 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:12.619 [170/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:12.619 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:12.619 [172/265] Linking static target lib/librte_mbuf.a 00:04:12.619 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:12.619 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:12.619 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:12.619 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:12.619 [177/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:12.619 [178/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:12.619 [179/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:12.619 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:12.619 [181/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.619 [182/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:12.619 [183/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.619 [184/265] Linking static target lib/librte_hash.a 00:04:12.619 [185/265] Linking target lib/librte_kvargs.so.24.0 00:04:12.619 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:12.878 [187/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.878 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:12.878 [189/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:12.878 [190/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.878 [191/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:12.878 [192/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:12.878 [193/265] Linking target lib/librte_telemetry.so.24.0 00:04:12.878 [194/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.878 [195/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:12.878 [196/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:12.878 [197/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:12.878 [198/265] Linking static target drivers/librte_bus_vdev.a 00:04:12.878 [199/265] Linking static target lib/librte_cryptodev.a 00:04:12.878 [200/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:12.878 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:12.878 [202/265] Generating lib/dmadev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:04:12.878 [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:12.878 [204/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:12.878 [205/265] Linking static target drivers/librte_bus_pci.a 00:04:12.878 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:13.138 [207/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.138 [208/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:13.138 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:13.138 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:13.138 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:13.138 [212/265] Linking static target drivers/librte_mempool_ring.a 00:04:13.138 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.138 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.138 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.138 [216/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.396 [217/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:13.396 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.396 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.397 [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.397 [221/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:13.397 [222/265] Linking static target lib/librte_ethdev.a 00:04:13.397 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.655 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.591 [225/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.159 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:15.159 [227/265] Linking static target lib/librte_vhost.a 00:04:17.062 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.256 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.639 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.639 [231/265] Linking target lib/librte_eal.so.24.0 00:04:22.639 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:22.639 [233/265] Linking target lib/librte_timer.so.24.0 00:04:22.639 [234/265] Linking target lib/librte_meter.so.24.0 00:04:22.639 [235/265] Linking target lib/librte_ring.so.24.0 00:04:22.639 [236/265] Linking target lib/librte_pci.so.24.0 00:04:22.639 [237/265] Linking target lib/librte_dmadev.so.24.0 00:04:22.639 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:04:22.898 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 
00:04:22.898 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:22.898 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:22.898 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:22.898 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:22.898 [244/265] Linking target lib/librte_rcu.so.24.0 00:04:22.898 [245/265] Linking target lib/librte_mempool.so.24.0 00:04:22.898 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:04:22.898 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:22.898 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:23.156 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:04:23.156 [250/265] Linking target lib/librte_mbuf.so.24.0 00:04:23.156 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:23.156 [252/265] Linking target lib/librte_compressdev.so.24.0 00:04:23.156 [253/265] Linking target lib/librte_reorder.so.24.0 00:04:23.156 [254/265] Linking target lib/librte_net.so.24.0 00:04:23.156 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:04:23.414 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:23.414 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:23.414 [258/265] Linking target lib/librte_security.so.24.0 00:04:23.414 [259/265] Linking target lib/librte_cmdline.so.24.0 00:04:23.414 [260/265] Linking target lib/librte_hash.so.24.0 00:04:23.414 [261/265] Linking target lib/librte_ethdev.so.24.0 00:04:23.672 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:23.672 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:23.672 [264/265] Linking target lib/librte_power.so.24.0 00:04:23.672 [265/265] Linking target lib/librte_vhost.so.24.0 00:04:23.672 INFO: autodetecting backend as ninja 00:04:23.672 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:04:24.607 CC lib/ut_mock/mock.o 00:04:24.607 CC lib/log/log.o 00:04:24.607 CC lib/log/log_deprecated.o 00:04:24.607 CC lib/log/log_flags.o 00:04:24.607 CC lib/ut/ut.o 00:04:24.866 LIB libspdk_ut_mock.a 00:04:24.866 LIB libspdk_log.a 00:04:24.866 SO libspdk_ut_mock.so.5.0 00:04:24.866 LIB libspdk_ut.a 00:04:24.866 SO libspdk_log.so.6.1 00:04:24.866 SO libspdk_ut.so.1.0 00:04:24.866 SYMLINK libspdk_ut_mock.so 00:04:24.866 SYMLINK libspdk_log.so 00:04:24.866 SYMLINK libspdk_ut.so 00:04:25.125 CXX lib/trace_parser/trace.o 00:04:25.125 CC lib/dma/dma.o 00:04:25.125 CC lib/ioat/ioat.o 00:04:25.125 CC lib/util/base64.o 00:04:25.125 CC lib/util/bit_array.o 00:04:25.125 CC lib/util/cpuset.o 00:04:25.125 CC lib/util/crc16.o 00:04:25.125 CC lib/util/crc32.o 00:04:25.125 CC lib/util/crc32c.o 00:04:25.125 CC lib/util/crc32_ieee.o 00:04:25.125 CC lib/util/crc64.o 00:04:25.125 CC lib/util/dif.o 00:04:25.125 CC lib/util/fd.o 00:04:25.125 CC lib/util/file.o 00:04:25.125 CC lib/util/hexlify.o 00:04:25.125 CC lib/util/iov.o 00:04:25.125 CC lib/util/math.o 00:04:25.125 CC lib/util/pipe.o 00:04:25.125 CC lib/util/strerror_tls.o 00:04:25.125 CC lib/util/string.o 00:04:25.125 CC lib/util/uuid.o 00:04:25.125 CC lib/util/fd_group.o 00:04:25.125 CC 
lib/util/xor.o 00:04:25.125 CC lib/util/zipf.o 00:04:25.125 CC lib/vfio_user/host/vfio_user_pci.o 00:04:25.125 CC lib/vfio_user/host/vfio_user.o 00:04:25.384 LIB libspdk_dma.a 00:04:25.384 SO libspdk_dma.so.3.0 00:04:25.384 SYMLINK libspdk_dma.so 00:04:25.384 LIB libspdk_ioat.a 00:04:25.384 SO libspdk_ioat.so.6.0 00:04:25.384 LIB libspdk_vfio_user.a 00:04:25.643 SO libspdk_vfio_user.so.4.0 00:04:25.643 SYMLINK libspdk_ioat.so 00:04:25.643 SYMLINK libspdk_vfio_user.so 00:04:25.643 LIB libspdk_util.a 00:04:25.643 SO libspdk_util.so.8.0 00:04:25.901 SYMLINK libspdk_util.so 00:04:25.901 LIB libspdk_trace_parser.a 00:04:25.901 SO libspdk_trace_parser.so.4.0 00:04:26.159 CC lib/rdma/common.o 00:04:26.159 CC lib/rdma/rdma_verbs.o 00:04:26.159 SYMLINK libspdk_trace_parser.so 00:04:26.159 CC lib/vmd/vmd.o 00:04:26.159 CC lib/vmd/led.o 00:04:26.159 CC lib/json/json_util.o 00:04:26.159 CC lib/idxd/idxd.o 00:04:26.159 CC lib/json/json_parse.o 00:04:26.159 CC lib/idxd/idxd_user.o 00:04:26.159 CC lib/json/json_write.o 00:04:26.159 CC lib/env_dpdk/env.o 00:04:26.159 CC lib/env_dpdk/memory.o 00:04:26.159 CC lib/env_dpdk/pci.o 00:04:26.159 CC lib/conf/conf.o 00:04:26.159 CC lib/env_dpdk/init.o 00:04:26.159 CC lib/env_dpdk/threads.o 00:04:26.159 CC lib/env_dpdk/pci_ioat.o 00:04:26.159 CC lib/env_dpdk/pci_virtio.o 00:04:26.159 CC lib/env_dpdk/pci_vmd.o 00:04:26.159 CC lib/env_dpdk/pci_idxd.o 00:04:26.159 CC lib/env_dpdk/pci_event.o 00:04:26.159 CC lib/env_dpdk/sigbus_handler.o 00:04:26.159 CC lib/env_dpdk/pci_dpdk.o 00:04:26.159 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:26.159 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:26.418 LIB libspdk_conf.a 00:04:26.418 LIB libspdk_rdma.a 00:04:26.418 SO libspdk_conf.so.5.0 00:04:26.418 SO libspdk_rdma.so.5.0 00:04:26.418 LIB libspdk_json.a 00:04:26.418 SYMLINK libspdk_conf.so 00:04:26.418 SO libspdk_json.so.5.1 00:04:26.418 SYMLINK libspdk_rdma.so 00:04:26.418 LIB libspdk_idxd.a 00:04:26.418 SYMLINK libspdk_json.so 00:04:26.684 SO libspdk_idxd.so.11.0 00:04:26.684 SYMLINK libspdk_idxd.so 00:04:26.684 CC lib/jsonrpc/jsonrpc_server.o 00:04:26.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:26.684 CC lib/jsonrpc/jsonrpc_client.o 00:04:26.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:26.684 LIB libspdk_vmd.a 00:04:26.684 SO libspdk_vmd.so.5.0 00:04:26.942 SYMLINK libspdk_vmd.so 00:04:26.942 LIB libspdk_jsonrpc.a 00:04:26.942 SO libspdk_jsonrpc.so.5.1 00:04:27.216 SYMLINK libspdk_jsonrpc.so 00:04:27.216 CC lib/rpc/rpc.o 00:04:27.502 LIB libspdk_env_dpdk.a 00:04:27.502 LIB libspdk_rpc.a 00:04:27.502 SO libspdk_rpc.so.5.0 00:04:27.502 SO libspdk_env_dpdk.so.13.0 00:04:27.773 SYMLINK libspdk_rpc.so 00:04:27.773 SYMLINK libspdk_env_dpdk.so 00:04:27.773 CC lib/trace/trace.o 00:04:27.773 CC lib/trace/trace_flags.o 00:04:27.773 CC lib/notify/notify.o 00:04:27.773 CC lib/notify/notify_rpc.o 00:04:27.773 CC lib/trace/trace_rpc.o 00:04:27.773 CC lib/sock/sock.o 00:04:27.773 CC lib/sock/sock_rpc.o 00:04:28.032 LIB libspdk_notify.a 00:04:28.032 SO libspdk_notify.so.5.0 00:04:28.032 LIB libspdk_trace.a 00:04:28.032 SYMLINK libspdk_notify.so 00:04:28.032 SO libspdk_trace.so.9.0 00:04:28.291 SYMLINK libspdk_trace.so 00:04:28.291 LIB libspdk_sock.a 00:04:28.291 SO libspdk_sock.so.8.0 00:04:28.291 SYMLINK libspdk_sock.so 00:04:28.291 CC lib/thread/thread.o 00:04:28.291 CC lib/thread/iobuf.o 00:04:28.549 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:28.549 CC lib/nvme/nvme_ctrlr.o 00:04:28.549 CC lib/nvme/nvme_ns_cmd.o 00:04:28.549 CC lib/nvme/nvme_fabric.o 00:04:28.549 CC lib/nvme/nvme_ns.o 00:04:28.549 CC 
lib/nvme/nvme_pcie_common.o 00:04:28.549 CC lib/nvme/nvme_pcie.o 00:04:28.549 CC lib/nvme/nvme_qpair.o 00:04:28.549 CC lib/nvme/nvme.o 00:04:28.549 CC lib/nvme/nvme_quirks.o 00:04:28.549 CC lib/nvme/nvme_transport.o 00:04:28.549 CC lib/nvme/nvme_discovery.o 00:04:28.549 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:28.549 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:28.549 CC lib/nvme/nvme_tcp.o 00:04:28.549 CC lib/nvme/nvme_opal.o 00:04:28.549 CC lib/nvme/nvme_io_msg.o 00:04:28.549 CC lib/nvme/nvme_poll_group.o 00:04:28.549 CC lib/nvme/nvme_zns.o 00:04:28.549 CC lib/nvme/nvme_cuse.o 00:04:28.549 CC lib/nvme/nvme_vfio_user.o 00:04:28.549 CC lib/nvme/nvme_rdma.o 00:04:29.925 LIB libspdk_thread.a 00:04:29.925 SO libspdk_thread.so.9.0 00:04:29.925 SYMLINK libspdk_thread.so 00:04:29.925 LIB libspdk_nvme.a 00:04:30.182 SO libspdk_nvme.so.12.0 00:04:30.182 CC lib/blob/blobstore.o 00:04:30.182 CC lib/blob/request.o 00:04:30.182 CC lib/blob/zeroes.o 00:04:30.182 CC lib/blob/blob_bs_dev.o 00:04:30.183 CC lib/accel/accel.o 00:04:30.183 CC lib/accel/accel_rpc.o 00:04:30.183 CC lib/accel/accel_sw.o 00:04:30.183 CC lib/init/json_config.o 00:04:30.183 CC lib/init/subsystem.o 00:04:30.183 CC lib/init/subsystem_rpc.o 00:04:30.183 CC lib/init/rpc.o 00:04:30.183 CC lib/virtio/virtio_vhost_user.o 00:04:30.183 CC lib/virtio/virtio.o 00:04:30.183 CC lib/virtio/virtio_vfio_user.o 00:04:30.183 CC lib/virtio/virtio_pci.o 00:04:30.441 SYMLINK libspdk_nvme.so 00:04:30.441 LIB libspdk_init.a 00:04:30.441 SO libspdk_init.so.4.0 00:04:30.441 LIB libspdk_virtio.a 00:04:30.441 SYMLINK libspdk_init.so 00:04:30.441 SO libspdk_virtio.so.6.0 00:04:30.699 SYMLINK libspdk_virtio.so 00:04:30.699 CC lib/event/app.o 00:04:30.699 CC lib/event/reactor.o 00:04:30.699 CC lib/event/log_rpc.o 00:04:30.699 CC lib/event/app_rpc.o 00:04:30.699 CC lib/event/scheduler_static.o 00:04:31.265 LIB libspdk_event.a 00:04:31.265 LIB libspdk_accel.a 00:04:31.265 SO libspdk_event.so.12.0 00:04:31.265 SO libspdk_accel.so.14.0 00:04:31.265 SYMLINK libspdk_event.so 00:04:31.265 SYMLINK libspdk_accel.so 00:04:31.523 CC lib/bdev/bdev.o 00:04:31.523 CC lib/bdev/bdev_rpc.o 00:04:31.523 CC lib/bdev/bdev_zone.o 00:04:31.523 CC lib/bdev/part.o 00:04:31.523 CC lib/bdev/scsi_nvme.o 00:04:32.904 LIB libspdk_blob.a 00:04:32.904 SO libspdk_blob.so.10.1 00:04:32.904 SYMLINK libspdk_blob.so 00:04:33.162 LIB libspdk_bdev.a 00:04:33.162 SO libspdk_bdev.so.14.0 00:04:33.162 CC lib/blobfs/blobfs.o 00:04:33.162 CC lib/blobfs/tree.o 00:04:33.162 CC lib/lvol/lvol.o 00:04:33.420 SYMLINK libspdk_bdev.so 00:04:33.420 CC lib/scsi/dev.o 00:04:33.420 CC lib/nvmf/ctrlr.o 00:04:33.420 CC lib/scsi/lun.o 00:04:33.420 CC lib/nvmf/ctrlr_discovery.o 00:04:33.420 CC lib/scsi/port.o 00:04:33.420 CC lib/nvmf/ctrlr_bdev.o 00:04:33.420 CC lib/nvmf/nvmf.o 00:04:33.420 CC lib/nvmf/subsystem.o 00:04:33.420 CC lib/scsi/scsi.o 00:04:33.420 CC lib/scsi/scsi_bdev.o 00:04:33.420 CC lib/nvmf/nvmf_rpc.o 00:04:33.420 CC lib/scsi/scsi_pr.o 00:04:33.420 CC lib/nvmf/transport.o 00:04:33.420 CC lib/scsi/scsi_rpc.o 00:04:33.420 CC lib/scsi/task.o 00:04:33.420 CC lib/nvmf/tcp.o 00:04:33.421 CC lib/nvmf/rdma.o 00:04:33.421 CC lib/nbd/nbd.o 00:04:33.421 CC lib/ublk/ublk.o 00:04:33.421 CC lib/nbd/nbd_rpc.o 00:04:33.421 CC lib/ublk/ublk_rpc.o 00:04:33.421 CC lib/ftl/ftl_core.o 00:04:33.421 CC lib/ftl/ftl_init.o 00:04:33.421 CC lib/ftl/ftl_layout.o 00:04:33.421 CC lib/ftl/ftl_debug.o 00:04:33.421 CC lib/ftl/ftl_io.o 00:04:33.421 CC lib/ftl/ftl_sb.o 00:04:33.421 CC lib/ftl/ftl_l2p.o 00:04:33.421 CC lib/ftl/ftl_l2p_flat.o 
00:04:33.421 CC lib/ftl/ftl_band.o 00:04:33.421 CC lib/ftl/ftl_nv_cache.o 00:04:33.421 CC lib/ftl/ftl_writer.o 00:04:33.421 CC lib/ftl/ftl_band_ops.o 00:04:33.421 CC lib/ftl/ftl_rq.o 00:04:33.421 CC lib/ftl/ftl_reloc.o 00:04:33.421 CC lib/ftl/ftl_l2p_cache.o 00:04:33.421 CC lib/ftl/ftl_p2l.o 00:04:33.421 CC lib/ftl/mngt/ftl_mngt.o 00:04:33.421 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:33.421 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:33.421 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.681 CC lib/ftl/utils/ftl_conf.o 00:04:33.681 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.681 CC lib/ftl/utils/ftl_md.o 00:04:33.681 CC lib/ftl/utils/ftl_mempool.o 00:04:33.681 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.681 CC lib/ftl/utils/ftl_property.o 00:04:33.681 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.681 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.681 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.681 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.681 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.682 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.682 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.682 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.682 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.682 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.682 CC lib/ftl/base/ftl_base_bdev.o 00:04:33.682 CC lib/ftl/base/ftl_base_dev.o 00:04:33.682 CC lib/ftl/ftl_trace.o 00:04:33.942 LIB libspdk_nbd.a 00:04:33.942 LIB libspdk_blobfs.a 00:04:33.942 SO libspdk_nbd.so.6.0 00:04:34.200 SO libspdk_blobfs.so.9.0 00:04:34.200 LIB libspdk_scsi.a 00:04:34.200 SYMLINK libspdk_nbd.so 00:04:34.200 SYMLINK libspdk_blobfs.so 00:04:34.200 LIB libspdk_lvol.a 00:04:34.200 SO libspdk_scsi.so.8.0 00:04:34.200 SO libspdk_lvol.so.9.1 00:04:34.200 SYMLINK libspdk_lvol.so 00:04:34.200 LIB libspdk_ublk.a 00:04:34.200 SYMLINK libspdk_scsi.so 00:04:34.200 SO libspdk_ublk.so.2.0 00:04:34.459 SYMLINK libspdk_ublk.so 00:04:34.459 CC lib/iscsi/conn.o 00:04:34.459 CC lib/iscsi/init_grp.o 00:04:34.459 CC lib/iscsi/iscsi.o 00:04:34.459 CC lib/iscsi/md5.o 00:04:34.459 CC lib/iscsi/param.o 00:04:34.459 CC lib/iscsi/portal_grp.o 00:04:34.459 CC lib/iscsi/tgt_node.o 00:04:34.459 CC lib/iscsi/iscsi_subsystem.o 00:04:34.459 CC lib/iscsi/iscsi_rpc.o 00:04:34.459 CC lib/iscsi/task.o 00:04:34.459 CC lib/vhost/vhost.o 00:04:34.459 CC lib/vhost/vhost_rpc.o 00:04:34.459 CC lib/vhost/vhost_scsi.o 00:04:34.459 CC lib/vhost/vhost_blk.o 00:04:34.459 CC lib/vhost/rte_vhost_user.o 00:04:34.717 LIB libspdk_ftl.a 00:04:34.717 SO libspdk_ftl.so.8.0 00:04:35.283 SYMLINK libspdk_ftl.so 00:04:35.542 LIB libspdk_vhost.a 00:04:35.542 SO libspdk_vhost.so.7.1 00:04:35.800 LIB libspdk_nvmf.a 00:04:35.800 SYMLINK libspdk_vhost.so 00:04:35.800 SO libspdk_nvmf.so.17.0 00:04:35.800 LIB libspdk_iscsi.a 00:04:35.800 SO libspdk_iscsi.so.7.0 00:04:36.057 SYMLINK libspdk_nvmf.so 00:04:36.057 SYMLINK libspdk_iscsi.so 00:04:36.315 CC module/env_dpdk/env_dpdk_rpc.o 00:04:36.577 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:36.577 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:36.577 CC module/accel/iaa/accel_iaa.o 00:04:36.577 CC module/accel/iaa/accel_iaa_rpc.o 00:04:36.577 CC module/accel/ioat/accel_ioat_rpc.o 00:04:36.577 CC 
module/accel/ioat/accel_ioat.o 00:04:36.577 CC module/accel/dsa/accel_dsa.o 00:04:36.577 CC module/accel/error/accel_error.o 00:04:36.577 CC module/accel/dsa/accel_dsa_rpc.o 00:04:36.577 CC module/accel/error/accel_error_rpc.o 00:04:36.577 CC module/scheduler/gscheduler/gscheduler.o 00:04:36.577 CC module/sock/posix/posix.o 00:04:36.577 CC module/blob/bdev/blob_bdev.o 00:04:36.577 LIB libspdk_env_dpdk_rpc.a 00:04:36.577 SO libspdk_env_dpdk_rpc.so.5.0 00:04:36.577 SYMLINK libspdk_env_dpdk_rpc.so 00:04:36.577 LIB libspdk_scheduler_dpdk_governor.a 00:04:36.577 LIB libspdk_scheduler_gscheduler.a 00:04:36.577 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:36.577 SO libspdk_scheduler_gscheduler.so.3.0 00:04:36.577 LIB libspdk_accel_error.a 00:04:36.577 LIB libspdk_scheduler_dynamic.a 00:04:36.577 LIB libspdk_accel_ioat.a 00:04:36.836 LIB libspdk_accel_iaa.a 00:04:36.836 SO libspdk_accel_ioat.so.5.0 00:04:36.836 SO libspdk_accel_error.so.1.0 00:04:36.836 SO libspdk_scheduler_dynamic.so.3.0 00:04:36.836 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:36.836 LIB libspdk_accel_dsa.a 00:04:36.836 SYMLINK libspdk_scheduler_gscheduler.so 00:04:36.836 SO libspdk_accel_iaa.so.2.0 00:04:36.836 LIB libspdk_blob_bdev.a 00:04:36.836 SO libspdk_accel_dsa.so.4.0 00:04:36.836 SYMLINK libspdk_accel_ioat.so 00:04:36.836 SYMLINK libspdk_scheduler_dynamic.so 00:04:36.836 SYMLINK libspdk_accel_error.so 00:04:36.836 SO libspdk_blob_bdev.so.10.1 00:04:36.836 SYMLINK libspdk_accel_iaa.so 00:04:36.836 SYMLINK libspdk_accel_dsa.so 00:04:36.836 SYMLINK libspdk_blob_bdev.so 00:04:37.096 CC module/bdev/passthru/vbdev_passthru.o 00:04:37.096 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:37.096 CC module/bdev/error/vbdev_error.o 00:04:37.096 CC module/bdev/delay/vbdev_delay.o 00:04:37.096 CC module/bdev/error/vbdev_error_rpc.o 00:04:37.096 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:37.096 CC module/bdev/aio/bdev_aio.o 00:04:37.096 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.096 CC module/bdev/ftl/bdev_ftl.o 00:04:37.096 CC module/blobfs/bdev/blobfs_bdev.o 00:04:37.096 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.096 CC module/bdev/nvme/bdev_nvme.o 00:04:37.096 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:37.096 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:37.096 CC module/bdev/nvme/nvme_rpc.o 00:04:37.096 CC module/bdev/nvme/bdev_mdns_client.o 00:04:37.096 CC module/bdev/gpt/vbdev_gpt.o 00:04:37.096 CC module/bdev/split/vbdev_split.o 00:04:37.096 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.096 CC module/bdev/gpt/gpt.o 00:04:37.096 CC module/bdev/split/vbdev_split_rpc.o 00:04:37.096 CC module/bdev/nvme/vbdev_opal.o 00:04:37.096 CC module/bdev/lvol/vbdev_lvol.o 00:04:37.096 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:37.096 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.096 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:37.096 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:37.096 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:37.096 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:37.096 CC module/bdev/null/bdev_null.o 00:04:37.096 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.096 CC module/bdev/null/bdev_null_rpc.o 00:04:37.096 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:37.096 CC module/bdev/malloc/bdev_malloc.o 00:04:37.096 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.096 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.096 CC module/bdev/raid/bdev_raid.o 00:04:37.096 CC module/bdev/raid/bdev_raid_rpc.o 00:04:37.096 CC module/bdev/raid/raid0.o 00:04:37.096 CC module/bdev/raid/bdev_raid_sb.o 00:04:37.096 CC 
module/bdev/raid/raid1.o 00:04:37.096 CC module/bdev/raid/concat.o 00:04:37.355 LIB libspdk_sock_posix.a 00:04:37.355 SO libspdk_sock_posix.so.5.0 00:04:37.355 SYMLINK libspdk_sock_posix.so 00:04:37.355 LIB libspdk_blobfs_bdev.a 00:04:37.614 SO libspdk_blobfs_bdev.so.5.0 00:04:37.614 LIB libspdk_bdev_error.a 00:04:37.614 LIB libspdk_bdev_split.a 00:04:37.614 SYMLINK libspdk_blobfs_bdev.so 00:04:37.614 LIB libspdk_bdev_delay.a 00:04:37.614 LIB libspdk_bdev_gpt.a 00:04:37.614 SO libspdk_bdev_error.so.5.0 00:04:37.614 LIB libspdk_bdev_ftl.a 00:04:37.614 LIB libspdk_bdev_passthru.a 00:04:37.614 LIB libspdk_bdev_null.a 00:04:37.614 LIB libspdk_bdev_malloc.a 00:04:37.614 SO libspdk_bdev_split.so.5.0 00:04:37.614 LIB libspdk_bdev_aio.a 00:04:37.614 SO libspdk_bdev_null.so.5.0 00:04:37.614 SO libspdk_bdev_delay.so.5.0 00:04:37.614 SO libspdk_bdev_passthru.so.5.0 00:04:37.614 SO libspdk_bdev_gpt.so.5.0 00:04:37.614 SO libspdk_bdev_ftl.so.5.0 00:04:37.614 SO libspdk_bdev_malloc.so.5.0 00:04:37.614 SO libspdk_bdev_aio.so.5.0 00:04:37.614 LIB libspdk_bdev_iscsi.a 00:04:37.614 SYMLINK libspdk_bdev_error.so 00:04:37.614 SYMLINK libspdk_bdev_split.so 00:04:37.614 LIB libspdk_bdev_zone_block.a 00:04:37.614 SYMLINK libspdk_bdev_delay.so 00:04:37.614 SYMLINK libspdk_bdev_null.so 00:04:37.614 SO libspdk_bdev_iscsi.so.5.0 00:04:37.614 SYMLINK libspdk_bdev_passthru.so 00:04:37.614 SYMLINK libspdk_bdev_malloc.so 00:04:37.614 SYMLINK libspdk_bdev_gpt.so 00:04:37.614 SYMLINK libspdk_bdev_ftl.so 00:04:37.614 SO libspdk_bdev_zone_block.so.5.0 00:04:37.614 SYMLINK libspdk_bdev_aio.so 00:04:37.614 SYMLINK libspdk_bdev_iscsi.so 00:04:37.873 SYMLINK libspdk_bdev_zone_block.so 00:04:37.873 LIB libspdk_bdev_lvol.a 00:04:37.873 LIB libspdk_bdev_virtio.a 00:04:37.873 SO libspdk_bdev_lvol.so.5.0 00:04:37.873 SO libspdk_bdev_virtio.so.5.0 00:04:37.873 SYMLINK libspdk_bdev_lvol.so 00:04:37.873 SYMLINK libspdk_bdev_virtio.so 00:04:38.132 LIB libspdk_bdev_raid.a 00:04:38.132 SO libspdk_bdev_raid.so.5.0 00:04:38.390 SYMLINK libspdk_bdev_raid.so 00:04:39.329 LIB libspdk_bdev_nvme.a 00:04:39.329 SO libspdk_bdev_nvme.so.6.0 00:04:39.588 SYMLINK libspdk_bdev_nvme.so 00:04:39.853 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:39.854 CC module/event/subsystems/vmd/vmd.o 00:04:39.854 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:39.854 CC module/event/subsystems/scheduler/scheduler.o 00:04:39.854 CC module/event/subsystems/sock/sock.o 00:04:39.854 CC module/event/subsystems/iobuf/iobuf.o 00:04:39.854 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:40.112 LIB libspdk_event_vhost_blk.a 00:04:40.112 LIB libspdk_event_sock.a 00:04:40.112 LIB libspdk_event_vmd.a 00:04:40.112 LIB libspdk_event_scheduler.a 00:04:40.112 SO libspdk_event_vhost_blk.so.2.0 00:04:40.112 LIB libspdk_event_iobuf.a 00:04:40.112 SO libspdk_event_sock.so.4.0 00:04:40.112 SO libspdk_event_vmd.so.5.0 00:04:40.112 SO libspdk_event_scheduler.so.3.0 00:04:40.112 SO libspdk_event_iobuf.so.2.0 00:04:40.112 SYMLINK libspdk_event_vhost_blk.so 00:04:40.112 SYMLINK libspdk_event_sock.so 00:04:40.112 SYMLINK libspdk_event_scheduler.so 00:04:40.112 SYMLINK libspdk_event_vmd.so 00:04:40.112 SYMLINK libspdk_event_iobuf.so 00:04:40.372 CC module/event/subsystems/accel/accel.o 00:04:40.631 LIB libspdk_event_accel.a 00:04:40.631 SO libspdk_event_accel.so.5.0 00:04:40.631 SYMLINK libspdk_event_accel.so 00:04:40.891 CC module/event/subsystems/bdev/bdev.o 00:04:41.149 LIB libspdk_event_bdev.a 00:04:41.149 SO libspdk_event_bdev.so.5.0 00:04:41.149 SYMLINK libspdk_event_bdev.so 
00:04:41.408 CC module/event/subsystems/nbd/nbd.o 00:04:41.408 CC module/event/subsystems/scsi/scsi.o 00:04:41.408 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:41.408 CC module/event/subsystems/ublk/ublk.o 00:04:41.408 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:41.408 LIB libspdk_event_nbd.a 00:04:41.408 SO libspdk_event_nbd.so.5.0 00:04:41.408 LIB libspdk_event_scsi.a 00:04:41.408 LIB libspdk_event_ublk.a 00:04:41.668 SO libspdk_event_scsi.so.5.0 00:04:41.668 SO libspdk_event_ublk.so.2.0 00:04:41.668 SYMLINK libspdk_event_nbd.so 00:04:41.668 LIB libspdk_event_nvmf.a 00:04:41.668 SYMLINK libspdk_event_ublk.so 00:04:41.668 SYMLINK libspdk_event_scsi.so 00:04:41.668 SO libspdk_event_nvmf.so.5.0 00:04:41.668 SYMLINK libspdk_event_nvmf.so 00:04:41.927 CC module/event/subsystems/iscsi/iscsi.o 00:04:41.927 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:41.927 LIB libspdk_event_vhost_scsi.a 00:04:41.927 LIB libspdk_event_iscsi.a 00:04:41.927 SO libspdk_event_vhost_scsi.so.2.0 00:04:41.927 SO libspdk_event_iscsi.so.5.0 00:04:42.187 SYMLINK libspdk_event_iscsi.so 00:04:42.187 SYMLINK libspdk_event_vhost_scsi.so 00:04:42.187 SO libspdk.so.5.0 00:04:42.187 SYMLINK libspdk.so 00:04:42.446 CXX app/trace/trace.o 00:04:42.447 CC app/spdk_nvme_discover/discovery_aer.o 00:04:42.447 CC app/spdk_lspci/spdk_lspci.o 00:04:42.447 CC app/spdk_nvme_perf/perf.o 00:04:42.447 TEST_HEADER include/spdk/accel_module.h 00:04:42.447 TEST_HEADER include/spdk/accel.h 00:04:42.447 TEST_HEADER include/spdk/assert.h 00:04:42.447 CC test/rpc_client/rpc_client_test.o 00:04:42.447 TEST_HEADER include/spdk/bdev_module.h 00:04:42.447 TEST_HEADER include/spdk/base64.h 00:04:42.447 TEST_HEADER include/spdk/bdev.h 00:04:42.447 TEST_HEADER include/spdk/barrier.h 00:04:42.447 CC app/trace_record/trace_record.o 00:04:42.447 TEST_HEADER include/spdk/bdev_zone.h 00:04:42.447 CC app/spdk_top/spdk_top.o 00:04:42.447 CC app/spdk_nvme_identify/identify.o 00:04:42.447 TEST_HEADER include/spdk/bit_array.h 00:04:42.447 TEST_HEADER include/spdk/bit_pool.h 00:04:42.447 TEST_HEADER include/spdk/blob_bdev.h 00:04:42.447 TEST_HEADER include/spdk/blobfs.h 00:04:42.447 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:42.447 TEST_HEADER include/spdk/blob.h 00:04:42.447 TEST_HEADER include/spdk/config.h 00:04:42.447 TEST_HEADER include/spdk/cpuset.h 00:04:42.447 TEST_HEADER include/spdk/conf.h 00:04:42.447 CC app/nvmf_tgt/nvmf_main.o 00:04:42.447 TEST_HEADER include/spdk/crc16.h 00:04:42.447 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:42.447 TEST_HEADER include/spdk/crc64.h 00:04:42.447 TEST_HEADER include/spdk/crc32.h 00:04:42.447 TEST_HEADER include/spdk/dif.h 00:04:42.447 TEST_HEADER include/spdk/dma.h 00:04:42.447 TEST_HEADER include/spdk/endian.h 00:04:42.447 TEST_HEADER include/spdk/env_dpdk.h 00:04:42.447 TEST_HEADER include/spdk/env.h 00:04:42.447 TEST_HEADER include/spdk/event.h 00:04:42.447 TEST_HEADER include/spdk/fd.h 00:04:42.447 TEST_HEADER include/spdk/fd_group.h 00:04:42.447 TEST_HEADER include/spdk/ftl.h 00:04:42.447 TEST_HEADER include/spdk/file.h 00:04:42.447 TEST_HEADER include/spdk/gpt_spec.h 00:04:42.447 TEST_HEADER include/spdk/histogram_data.h 00:04:42.447 TEST_HEADER include/spdk/hexlify.h 00:04:42.447 TEST_HEADER include/spdk/idxd.h 00:04:42.447 TEST_HEADER include/spdk/init.h 00:04:42.447 TEST_HEADER include/spdk/idxd_spec.h 00:04:42.447 TEST_HEADER include/spdk/ioat.h 00:04:42.447 TEST_HEADER include/spdk/ioat_spec.h 00:04:42.447 TEST_HEADER include/spdk/jsonrpc.h 00:04:42.447 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:42.447 TEST_HEADER include/spdk/json.h 00:04:42.447 TEST_HEADER include/spdk/log.h 00:04:42.447 TEST_HEADER include/spdk/likely.h 00:04:42.447 TEST_HEADER include/spdk/memory.h 00:04:42.447 CC app/iscsi_tgt/iscsi_tgt.o 00:04:42.447 TEST_HEADER include/spdk/mmio.h 00:04:42.447 TEST_HEADER include/spdk/lvol.h 00:04:42.447 TEST_HEADER include/spdk/nbd.h 00:04:42.447 TEST_HEADER include/spdk/notify.h 00:04:42.447 CC app/vhost/vhost.o 00:04:42.447 TEST_HEADER include/spdk/nvme.h 00:04:42.447 TEST_HEADER include/spdk/nvme_intel.h 00:04:42.447 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:42.447 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:42.447 TEST_HEADER include/spdk/nvme_spec.h 00:04:42.447 TEST_HEADER include/spdk/nvme_zns.h 00:04:42.447 CC app/spdk_dd/spdk_dd.o 00:04:42.447 TEST_HEADER include/spdk/nvmf.h 00:04:42.447 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:42.447 TEST_HEADER include/spdk/nvmf_spec.h 00:04:42.447 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:42.447 TEST_HEADER include/spdk/nvmf_transport.h 00:04:42.447 TEST_HEADER include/spdk/opal_spec.h 00:04:42.447 TEST_HEADER include/spdk/opal.h 00:04:42.447 TEST_HEADER include/spdk/pci_ids.h 00:04:42.447 TEST_HEADER include/spdk/pipe.h 00:04:42.447 TEST_HEADER include/spdk/reduce.h 00:04:42.447 TEST_HEADER include/spdk/queue.h 00:04:42.447 TEST_HEADER include/spdk/rpc.h 00:04:42.447 TEST_HEADER include/spdk/scheduler.h 00:04:42.447 TEST_HEADER include/spdk/scsi.h 00:04:42.447 TEST_HEADER include/spdk/scsi_spec.h 00:04:42.447 TEST_HEADER include/spdk/sock.h 00:04:42.447 TEST_HEADER include/spdk/thread.h 00:04:42.447 TEST_HEADER include/spdk/string.h 00:04:42.447 TEST_HEADER include/spdk/stdinc.h 00:04:42.447 TEST_HEADER include/spdk/trace.h 00:04:42.447 TEST_HEADER include/spdk/trace_parser.h 00:04:42.447 TEST_HEADER include/spdk/tree.h 00:04:42.447 TEST_HEADER include/spdk/ublk.h 00:04:42.447 TEST_HEADER include/spdk/uuid.h 00:04:42.447 TEST_HEADER include/spdk/util.h 00:04:42.447 TEST_HEADER include/spdk/version.h 00:04:42.447 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:42.447 CC app/spdk_tgt/spdk_tgt.o 00:04:42.447 TEST_HEADER include/spdk/vhost.h 00:04:42.447 TEST_HEADER include/spdk/xor.h 00:04:42.447 TEST_HEADER include/spdk/vmd.h 00:04:42.447 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:42.447 TEST_HEADER include/spdk/zipf.h 00:04:42.447 CXX test/cpp_headers/accel_module.o 00:04:42.447 CXX test/cpp_headers/assert.o 00:04:42.447 CXX test/cpp_headers/accel.o 00:04:42.447 CXX test/cpp_headers/barrier.o 00:04:42.447 CXX test/cpp_headers/bdev.o 00:04:42.447 CXX test/cpp_headers/bdev_module.o 00:04:42.447 CXX test/cpp_headers/base64.o 00:04:42.447 CXX test/cpp_headers/bdev_zone.o 00:04:42.447 CXX test/cpp_headers/bit_pool.o 00:04:42.447 CXX test/cpp_headers/bit_array.o 00:04:42.447 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.715 CXX test/cpp_headers/blob_bdev.o 00:04:42.715 CXX test/cpp_headers/blob.o 00:04:42.715 CXX test/cpp_headers/conf.o 00:04:42.715 CXX test/cpp_headers/cpuset.o 00:04:42.715 CXX test/cpp_headers/config.o 00:04:42.715 CXX test/cpp_headers/blobfs.o 00:04:42.715 CXX test/cpp_headers/crc16.o 00:04:42.715 CXX test/cpp_headers/crc32.o 00:04:42.715 CXX test/cpp_headers/dif.o 00:04:42.715 CC examples/nvme/abort/abort.o 00:04:42.715 CXX test/cpp_headers/dma.o 00:04:42.715 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.715 CXX test/cpp_headers/crc64.o 00:04:42.715 CXX test/cpp_headers/env_dpdk.o 00:04:42.715 CXX test/cpp_headers/env.o 00:04:42.715 CXX test/cpp_headers/event.o 00:04:42.715 
CXX test/cpp_headers/fd_group.o 00:04:42.715 CXX test/cpp_headers/endian.o 00:04:42.715 CC examples/nvme/arbitration/arbitration.o 00:04:42.715 CXX test/cpp_headers/fd.o 00:04:42.715 CXX test/cpp_headers/ftl.o 00:04:42.715 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:42.715 CXX test/cpp_headers/file.o 00:04:42.715 CXX test/cpp_headers/hexlify.o 00:04:42.715 CC examples/util/zipf/zipf.o 00:04:42.715 CXX test/cpp_headers/gpt_spec.o 00:04:42.715 CXX test/cpp_headers/idxd.o 00:04:42.715 CXX test/cpp_headers/histogram_data.o 00:04:42.715 CC test/env/pci/pci_ut.o 00:04:42.715 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:42.715 CC examples/ioat/perf/perf.o 00:04:42.715 CXX test/cpp_headers/init.o 00:04:42.715 CXX test/cpp_headers/ioat.o 00:04:42.715 CXX test/cpp_headers/idxd_spec.o 00:04:42.715 CC examples/vmd/led/led.o 00:04:42.715 CC examples/accel/perf/accel_perf.o 00:04:42.715 CC examples/sock/hello_world/hello_sock.o 00:04:42.715 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:42.715 CC test/thread/poller_perf/poller_perf.o 00:04:42.715 CC test/app/histogram_perf/histogram_perf.o 00:04:42.715 CC test/nvme/reset/reset.o 00:04:42.715 CC examples/nvme/reconnect/reconnect.o 00:04:42.715 CC examples/ioat/verify/verify.o 00:04:42.715 CC test/env/vtophys/vtophys.o 00:04:42.715 CC test/nvme/err_injection/err_injection.o 00:04:42.715 CC examples/nvme/hello_world/hello_world.o 00:04:42.715 CC test/nvme/startup/startup.o 00:04:42.715 CC test/nvme/sgl/sgl.o 00:04:42.715 CC examples/idxd/perf/perf.o 00:04:42.715 CC test/event/event_perf/event_perf.o 00:04:42.715 CC app/fio/nvme/fio_plugin.o 00:04:42.715 CC examples/bdev/hello_world/hello_bdev.o 00:04:42.715 CC examples/blob/hello_world/hello_blob.o 00:04:42.715 CC examples/nvme/hotplug/hotplug.o 00:04:42.715 CC test/env/memory/memory_ut.o 00:04:42.715 CC test/nvme/cuse/cuse.o 00:04:42.715 CC test/event/reactor_perf/reactor_perf.o 00:04:42.715 CC test/app/stub/stub.o 00:04:42.715 CC examples/blob/cli/blobcli.o 00:04:42.715 CC test/accel/dif/dif.o 00:04:42.716 CC examples/thread/thread/thread_ex.o 00:04:42.716 CC test/event/app_repeat/app_repeat.o 00:04:42.716 CC test/nvme/overhead/overhead.o 00:04:42.716 CC test/event/reactor/reactor.o 00:04:42.716 CC examples/bdev/bdevperf/bdevperf.o 00:04:42.716 CC test/app/jsoncat/jsoncat.o 00:04:42.716 CC test/nvme/e2edp/nvme_dp.o 00:04:42.716 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:42.716 CXX test/cpp_headers/ioat_spec.o 00:04:42.716 CC test/blobfs/mkfs/mkfs.o 00:04:42.716 CC test/nvme/simple_copy/simple_copy.o 00:04:42.716 CC test/nvme/boot_partition/boot_partition.o 00:04:42.716 CC examples/nvmf/nvmf/nvmf.o 00:04:42.716 CC test/nvme/connect_stress/connect_stress.o 00:04:42.716 CC test/nvme/reserve/reserve.o 00:04:42.716 CC test/nvme/compliance/nvme_compliance.o 00:04:42.716 CC test/app/bdev_svc/bdev_svc.o 00:04:42.716 CC test/nvme/fused_ordering/fused_ordering.o 00:04:42.716 CC test/nvme/fdp/fdp.o 00:04:42.716 CC test/dma/test_dma/test_dma.o 00:04:42.716 CC test/nvme/aer/aer.o 00:04:42.716 CC test/bdev/bdevio/bdevio.o 00:04:42.716 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:42.716 CC test/event/scheduler/scheduler.o 00:04:42.716 CC app/fio/bdev/fio_plugin.o 00:04:42.975 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.975 LINK spdk_nvme_discover 00:04:42.975 LINK rpc_client_test 00:04:42.975 CC test/lvol/esnap/esnap.o 00:04:42.975 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.975 LINK spdk_lspci 00:04:42.975 LINK vhost 00:04:42.975 LINK lsvmd 00:04:43.241 LINK spdk_trace_record 
00:04:43.241 LINK nvmf_tgt 00:04:43.241 LINK reactor_perf 00:04:43.241 LINK event_perf 00:04:43.241 LINK jsoncat 00:04:43.241 LINK app_repeat 00:04:43.241 LINK cmb_copy 00:04:43.241 LINK startup 00:04:43.241 CXX test/cpp_headers/iscsi_spec.o 00:04:43.241 LINK stub 00:04:43.241 LINK err_injection 00:04:43.241 LINK interrupt_tgt 00:04:43.241 LINK ioat_perf 00:04:43.241 CXX test/cpp_headers/json.o 00:04:43.241 CXX test/cpp_headers/jsonrpc.o 00:04:43.241 CXX test/cpp_headers/likely.o 00:04:43.241 LINK zipf 00:04:43.241 LINK doorbell_aers 00:04:43.241 CXX test/cpp_headers/memory.o 00:04:43.241 CXX test/cpp_headers/log.o 00:04:43.241 CXX test/cpp_headers/mmio.o 00:04:43.241 CXX test/cpp_headers/nbd.o 00:04:43.241 LINK boot_partition 00:04:43.241 CXX test/cpp_headers/lvol.o 00:04:43.241 LINK verify 00:04:43.241 CXX test/cpp_headers/notify.o 00:04:43.241 CXX test/cpp_headers/nvme.o 00:04:43.241 LINK reserve 00:04:43.241 CXX test/cpp_headers/nvme_ocssd.o 00:04:43.241 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:43.241 CXX test/cpp_headers/nvme_intel.o 00:04:43.241 LINK histogram_perf 00:04:43.241 LINK bdev_svc 00:04:43.241 LINK led 00:04:43.241 LINK hello_sock 00:04:43.241 LINK hello_blob 00:04:43.241 LINK pmr_persistence 00:04:43.241 LINK spdk_trace 00:04:43.241 LINK spdk_tgt 00:04:43.241 CXX test/cpp_headers/nvme_spec.o 00:04:43.241 CXX test/cpp_headers/nvme_zns.o 00:04:43.241 LINK reactor 00:04:43.241 LINK vtophys 00:04:43.241 LINK hotplug 00:04:43.241 LINK iscsi_tgt 00:04:43.241 CXX test/cpp_headers/nvmf_cmd.o 00:04:43.241 LINK poller_perf 00:04:43.241 LINK simple_copy 00:04:43.502 LINK reset 00:04:43.502 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:43.502 CXX test/cpp_headers/nvmf.o 00:04:43.502 LINK nvme_dp 00:04:43.502 CXX test/cpp_headers/nvmf_spec.o 00:04:43.502 CXX test/cpp_headers/nvmf_transport.o 00:04:43.502 CXX test/cpp_headers/opal.o 00:04:43.502 CXX test/cpp_headers/opal_spec.o 00:04:43.502 CXX test/cpp_headers/pci_ids.o 00:04:43.502 CXX test/cpp_headers/pipe.o 00:04:43.502 LINK env_dpdk_post_init 00:04:43.502 LINK connect_stress 00:04:43.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:43.502 LINK mkfs 00:04:43.502 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:43.502 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.502 LINK nvme_compliance 00:04:43.502 LINK fdp 00:04:43.502 LINK sgl 00:04:43.502 LINK abort 00:04:43.502 CXX test/cpp_headers/queue.o 00:04:43.502 CXX test/cpp_headers/reduce.o 00:04:43.502 CXX test/cpp_headers/rpc.o 00:04:43.502 LINK hello_world 00:04:43.502 CXX test/cpp_headers/scheduler.o 00:04:43.502 CXX test/cpp_headers/scsi.o 00:04:43.502 LINK dif 00:04:43.502 CXX test/cpp_headers/scsi_spec.o 00:04:43.502 CXX test/cpp_headers/sock.o 00:04:43.502 LINK fused_ordering 00:04:43.502 CXX test/cpp_headers/stdinc.o 00:04:43.502 LINK hello_bdev 00:04:43.502 CXX test/cpp_headers/string.o 00:04:43.502 CXX test/cpp_headers/thread.o 00:04:43.502 CXX test/cpp_headers/trace.o 00:04:43.502 CXX test/cpp_headers/trace_parser.o 00:04:43.502 LINK test_dma 00:04:43.502 CXX test/cpp_headers/tree.o 00:04:43.502 CXX test/cpp_headers/ublk.o 00:04:43.502 CXX test/cpp_headers/util.o 00:04:43.502 CXX test/cpp_headers/uuid.o 00:04:43.502 LINK scheduler 00:04:43.502 CXX test/cpp_headers/version.o 00:04:43.502 LINK pci_ut 00:04:43.502 CXX test/cpp_headers/vfio_user_pci.o 00:04:43.502 CXX test/cpp_headers/vfio_user_spec.o 00:04:43.502 CXX test/cpp_headers/vhost.o 00:04:43.502 LINK nvmf 00:04:43.502 CXX test/cpp_headers/vmd.o 00:04:43.502 LINK thread 00:04:43.761 CXX test/cpp_headers/zipf.o 
00:04:43.761 CXX test/cpp_headers/xor.o 00:04:43.761 LINK idxd_perf 00:04:43.761 LINK overhead 00:04:43.761 LINK arbitration 00:04:43.761 LINK reconnect 00:04:43.761 LINK spdk_dd 00:04:43.761 LINK aer 00:04:43.761 LINK blobcli 00:04:43.761 LINK spdk_nvme 00:04:43.761 LINK bdevio 00:04:43.761 LINK accel_perf 00:04:43.761 LINK spdk_bdev 00:04:44.019 LINK spdk_nvme_identify 00:04:44.019 LINK spdk_nvme_perf 00:04:44.019 LINK nvme_manage 00:04:44.019 LINK nvme_fuzz 00:04:44.284 LINK vhost_fuzz 00:04:44.284 LINK mem_callbacks 00:04:44.284 LINK memory_ut 00:04:44.542 LINK spdk_top 00:04:44.542 LINK bdevperf 00:04:44.542 LINK cuse 00:04:45.479 LINK iscsi_fuzz 00:04:48.013 LINK esnap 00:04:48.272 00:04:48.272 real 0m46.319s 00:04:48.272 user 7m43.936s 00:04:48.272 sys 3m49.668s 00:04:48.272 10:01:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:48.273 10:01:21 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.273 ************************************ 00:04:48.273 END TEST make 00:04:48.273 ************************************ 00:04:48.533 10:01:21 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.533 10:01:21 -- nvmf/common.sh@7 -- # uname -s 00:04:48.533 10:01:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.533 10:01:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.533 10:01:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.533 10:01:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.533 10:01:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.533 10:01:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.533 10:01:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.533 10:01:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.533 10:01:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.533 10:01:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.533 10:01:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:48.533 10:01:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:48.533 10:01:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.533 10:01:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.533 10:01:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:48.533 10:01:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.533 10:01:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.533 10:01:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.533 10:01:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.533 10:01:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.533 10:01:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.533 10:01:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.533 10:01:21 -- paths/export.sh@5 -- # export PATH 00:04:48.533 10:01:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.533 10:01:21 -- nvmf/common.sh@46 -- # : 0 00:04:48.533 10:01:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:48.533 10:01:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:48.533 10:01:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:48.533 10:01:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.533 10:01:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.533 10:01:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:48.533 10:01:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:48.533 10:01:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:48.533 10:01:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:48.533 10:01:21 -- spdk/autotest.sh@32 -- # uname -s 00:04:48.533 10:01:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:48.533 10:01:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:48.533 10:01:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:48.533 10:01:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:48.533 10:01:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:48.533 10:01:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:48.533 10:01:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:48.533 10:01:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:48.533 10:01:21 -- spdk/autotest.sh@48 -- # udevadm_pid=3214708 00:04:48.533 10:01:21 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:48.533 10:01:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:48.533 10:01:21 -- spdk/autotest.sh@54 -- # echo 3214710 00:04:48.533 10:01:21 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:48.533 10:01:21 -- spdk/autotest.sh@56 -- # echo 3214711 00:04:48.533 10:01:21 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:48.533 10:01:21 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:04:48.533 10:01:21 -- spdk/autotest.sh@60 -- # echo 3214712 00:04:48.533 10:01:21 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:48.533 10:01:21 -- spdk/autotest.sh@62 -- # echo 3214713 00:04:48.533 10:01:21 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.533 10:01:21 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:48.533 10:01:21 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:48.533 10:01:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:48.533 10:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.533 10:01:21 -- spdk/autotest.sh@70 -- # create_test_list 00:04:48.534 10:01:21 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:48.534 10:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:04:48.534 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:04:48.534 10:01:21 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:48.534 10:01:21 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.534 10:01:21 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.534 10:01:21 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:48.534 10:01:21 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.534 10:01:21 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:48.534 10:01:21 -- common/autotest_common.sh@1440 -- # uname 00:04:48.534 10:01:21 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:48.534 10:01:21 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:48.534 10:01:21 -- common/autotest_common.sh@1460 -- # uname 00:04:48.534 10:01:21 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:48.534 10:01:21 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:48.534 10:01:21 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:48.534 10:01:21 -- spdk/autotest.sh@83 -- # hash lcov 00:04:48.534 10:01:21 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:48.534 10:01:21 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:48.534 --rc lcov_branch_coverage=1 00:04:48.534 --rc lcov_function_coverage=1 00:04:48.534 --rc genhtml_branch_coverage=1 00:04:48.534 --rc genhtml_function_coverage=1 00:04:48.534 --rc genhtml_legend=1 00:04:48.534 --rc geninfo_all_blocks=1 00:04:48.534 ' 00:04:48.534 10:01:21 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:48.534 --rc lcov_branch_coverage=1 00:04:48.534 --rc lcov_function_coverage=1 00:04:48.534 --rc genhtml_branch_coverage=1 00:04:48.534 --rc genhtml_function_coverage=1 00:04:48.534 --rc genhtml_legend=1 00:04:48.534 --rc geninfo_all_blocks=1 00:04:48.534 ' 00:04:48.534 10:01:21 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:48.534 --rc lcov_branch_coverage=1 00:04:48.534 --rc lcov_function_coverage=1 00:04:48.534 --rc genhtml_branch_coverage=1 00:04:48.534 --rc genhtml_function_coverage=1 00:04:48.534 --rc genhtml_legend=1 00:04:48.534 
--rc geninfo_all_blocks=1 00:04:48.534 --no-external' 00:04:48.534 10:01:21 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:48.534 --rc lcov_branch_coverage=1 00:04:48.534 --rc lcov_function_coverage=1 00:04:48.534 --rc genhtml_branch_coverage=1 00:04:48.534 --rc genhtml_function_coverage=1 00:04:48.534 --rc genhtml_legend=1 00:04:48.534 --rc geninfo_all_blocks=1 00:04:48.534 --no-external' 00:04:48.534 10:01:21 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:48.793 lcov: LCOV version 1.14 00:04:48.793 10:01:21 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:03.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:03.675 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:03.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:03.675 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:03.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:03.675 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions 
found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no 
functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:18.626 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:18.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions 
found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:18.627 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:18.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:18.627 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:18.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:18.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:18.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:18.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:18.628 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:20.534 10:01:53 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:20.534 10:01:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.534 10:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.534 10:01:53 -- spdk/autotest.sh@102 -- # rm -f 00:05:20.534 10:01:53 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:23.824 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:05:23.824 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.2 (8086 2021): Already using the ioatdma driver 
00:05:23.824 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:23.824 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:23.824 10:01:56 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:23.824 10:01:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:23.824 10:01:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:23.824 10:01:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:23.824 10:01:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:23.824 10:01:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:23.824 10:01:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:23.824 10:01:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.824 10:01:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:23.824 10:01:56 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:23.824 10:01:56 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:23.824 10:01:56 -- spdk/autotest.sh@121 -- # grep -v p 00:05:23.824 10:01:56 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:23.824 10:01:56 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:23.824 10:01:56 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:23.824 10:01:56 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:23.824 10:01:56 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:23.824 No valid GPT data, bailing 00:05:23.824 10:01:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.824 10:01:57 -- scripts/common.sh@393 -- # pt= 00:05:23.824 10:01:57 -- scripts/common.sh@394 -- # return 1 00:05:23.824 10:01:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:23.824 1+0 records in 00:05:23.824 1+0 records out 00:05:23.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548353 s, 191 MB/s 00:05:23.824 10:01:57 -- spdk/autotest.sh@129 -- # sync 00:05:23.824 10:01:57 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:23.824 10:01:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:23.824 10:01:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.425 10:02:02 -- spdk/autotest.sh@135 -- # uname -s 00:05:30.425 10:02:02 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:30.425 10:02:02 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:30.425 10:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.425 10:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.425 10:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:30.425 ************************************ 00:05:30.425 START TEST setup.sh 00:05:30.425 ************************************ 00:05:30.425 
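The pre-cleanup step traced above follows a simple pattern: probe the scratch NVMe namespace for an existing partition table (scripts/spdk-gpt.py and blkid both come back empty, hence "No valid GPT data, bailing"), and only then zero the first MiB with dd before syncing. A minimal stand-alone sketch of that pattern follows; the helper name and the hard-coded /dev/nvme0n1 path are illustrative assumptions for this sketch, not part of the SPDK scripts.

    # Illustrative sketch only (assumed helper name, assumed device path); the real
    # logic lives in spdk/autotest.sh and scripts/common.sh as traced above.
    wipe_if_unpartitioned() {
        local dev=$1 pt
        # blkid reports the partition-table type (e.g. "gpt") when one exists
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n $pt ]]; then
            echo "$dev already carries a $pt partition table; leaving it untouched"
            return 1
        fi
        # No table found: clear the first 1 MiB, as the dd command in the trace does
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    }

    wipe_if_unpartitioned /dev/nvme0n1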
10:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:30.425 * Looking for test storage... 00:05:30.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:30.425 10:02:02 -- setup/test-setup.sh@10 -- # uname -s 00:05:30.425 10:02:02 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:30.425 10:02:02 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:30.425 10:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.425 10:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.425 10:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:30.425 ************************************ 00:05:30.425 START TEST acl 00:05:30.425 ************************************ 00:05:30.425 10:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:30.425 * Looking for test storage... 00:05:30.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:30.425 10:02:03 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:30.425 10:02:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:30.425 10:02:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:30.425 10:02:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:30.425 10:02:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:30.425 10:02:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:30.425 10:02:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:30.425 10:02:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.425 10:02:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:30.425 10:02:03 -- setup/acl.sh@12 -- # devs=() 00:05:30.425 10:02:03 -- setup/acl.sh@12 -- # declare -a devs 00:05:30.425 10:02:03 -- setup/acl.sh@13 -- # drivers=() 00:05:30.425 10:02:03 -- setup/acl.sh@13 -- # declare -A drivers 00:05:30.425 10:02:03 -- setup/acl.sh@51 -- # setup reset 00:05:30.425 10:02:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.425 10:02:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.961 10:02:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:32.961 10:02:06 -- setup/acl.sh@16 -- # local dev driver 00:05:32.961 10:02:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.961 10:02:06 -- setup/acl.sh@15 -- # setup output status 00:05:32.961 10:02:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.961 10:02:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.254 Hugepages 00:05:36.254 node hugesize free / total 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 00:05:36.254 Type BDF Vendor Device 
NUMA Driver Device Block devices 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:08 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- 
setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # continue 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:36.254 10:02:09 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:05:36.254 10:02:09 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:36.254 10:02:09 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:36.254 10:02:09 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.254 10:02:09 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:36.254 10:02:09 -- setup/acl.sh@54 -- # run_test denied denied 00:05:36.254 10:02:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.254 10:02:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.254 10:02:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.254 ************************************ 00:05:36.254 START TEST denied 00:05:36.254 ************************************ 00:05:36.254 10:02:09 -- common/autotest_common.sh@1104 -- # denied 00:05:36.254 10:02:09 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:05:36.254 10:02:09 -- setup/acl.sh@38 -- # setup output config 00:05:36.254 10:02:09 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:05:36.254 10:02:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.254 10:02:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:39.544 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:05:39.544 10:02:12 -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:05:39.544 10:02:12 -- setup/acl.sh@28 -- # local dev driver 00:05:39.544 10:02:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:39.544 10:02:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:05:39.544 10:02:12 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:05:39.544 10:02:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:39.544 10:02:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:39.544 10:02:12 -- setup/acl.sh@41 -- # setup reset 00:05:39.544 10:02:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.544 10:02:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:43.739 00:05:43.739 real 0m7.135s 00:05:43.739 user 0m2.352s 00:05:43.739 sys 0m4.019s 00:05:43.739 10:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.739 10:02:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.739 ************************************ 00:05:43.739 END TEST denied 00:05:43.739 ************************************ 00:05:43.739 10:02:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:43.739 10:02:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.739 10:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.739 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:05:43.739 ************************************ 00:05:43.739 START TEST allowed 00:05:43.739 ************************************ 00:05:43.739 10:02:16 -- common/autotest_common.sh@1104 -- # allowed 00:05:43.739 10:02:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:05:43.739 10:02:16 -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:05:43.739 10:02:16 -- setup/acl.sh@45 -- # setup output config 00:05:43.739 10:02:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.739 10:02:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:47.030 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:47.030 10:02:20 -- setup/acl.sh@47 -- # verify 00:05:47.030 10:02:20 -- setup/acl.sh@28 -- # local dev driver 00:05:47.030 10:02:20 -- setup/acl.sh@48 -- # setup reset 00:05:47.030 10:02:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.030 10:02:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:50.321 00:05:50.321 real 0m7.245s 00:05:50.321 user 0m2.243s 00:05:50.321 sys 0m4.098s 00:05:50.321 10:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.321 10:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.321 ************************************ 00:05:50.321 END TEST allowed 00:05:50.321 ************************************ 00:05:50.321 00:05:50.321 real 0m20.567s 00:05:50.321 user 0m6.871s 00:05:50.321 sys 0m12.245s 00:05:50.321 10:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.321 10:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.321 ************************************ 00:05:50.321 END TEST acl 00:05:50.321 ************************************ 00:05:50.321 10:02:23 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:50.321 10:02:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.321 10:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.321 10:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.321 ************************************ 00:05:50.321 START TEST hugepages 00:05:50.321 ************************************ 00:05:50.321 10:02:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:50.582 * Looking for test storage... 
00:05:50.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:50.582 10:02:23 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:50.582 10:02:23 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:50.582 10:02:23 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:50.582 10:02:23 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:50.582 10:02:23 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:50.582 10:02:23 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:50.582 10:02:23 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:50.582 10:02:23 -- setup/common.sh@18 -- # local node= 00:05:50.582 10:02:23 -- setup/common.sh@19 -- # local var val 00:05:50.582 10:02:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:50.582 10:02:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:50.582 10:02:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:50.582 10:02:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:50.582 10:02:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:50.582 10:02:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 67735168 kB' 'MemAvailable: 71707152 kB' 'Buffers: 2696 kB' 'Cached: 16110520 kB' 'SwapCached: 0 kB' 'Active: 13008552 kB' 'Inactive: 3680104 kB' 'Active(anon): 12384096 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578740 kB' 'Mapped: 193108 kB' 'Shmem: 11808656 kB' 'KReclaimable: 503384 kB' 'Slab: 1159804 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 656420 kB' 'KernelStack: 22704 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 13819504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.582 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.582 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 
00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 
00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # continue 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:50.583 10:02:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:50.583 10:02:23 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:50.583 10:02:23 -- setup/common.sh@33 -- # echo 2048 00:05:50.583 10:02:23 -- setup/common.sh@33 -- # return 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:50.583 10:02:23 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:50.583 10:02:23 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:50.583 10:02:23 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:50.583 10:02:23 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:50.583 10:02:23 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:50.583 10:02:23 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:50.583 10:02:23 -- setup/hugepages.sh@207 -- # get_nodes 00:05:50.583 10:02:23 -- setup/hugepages.sh@27 -- # local node 00:05:50.583 10:02:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.583 10:02:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:50.583 10:02:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:50.583 10:02:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:50.583 10:02:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:50.583 10:02:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:50.583 10:02:23 -- setup/hugepages.sh@208 -- # clear_hp 00:05:50.583 10:02:23 -- setup/hugepages.sh@37 -- # local node hp 00:05:50.583 10:02:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:50.583 10:02:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.583 10:02:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.583 10:02:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:50.583 10:02:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.583 10:02:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:50.583 10:02:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:50.583 10:02:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:50.583 10:02:23 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:50.583 10:02:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.583 10:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.583 10:02:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.583 ************************************ 00:05:50.583 START TEST default_setup 00:05:50.583 ************************************ 00:05:50.583 10:02:23 -- common/autotest_common.sh@1104 -- # default_setup 00:05:50.583 10:02:23 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:50.583 10:02:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:50.583 10:02:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:50.583 10:02:23 -- setup/hugepages.sh@51 -- # shift 00:05:50.583 10:02:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:50.583 10:02:23 -- setup/hugepages.sh@52 -- # local node_ids 00:05:50.583 10:02:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:50.584 10:02:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:50.584 10:02:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:50.584 10:02:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:50.584 10:02:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:50.584 10:02:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:50.584 10:02:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:50.584 10:02:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:50.584 10:02:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:50.584 10:02:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
00:05:50.584 10:02:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:50.584 10:02:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:50.584 10:02:23 -- setup/hugepages.sh@73 -- # return 0 00:05:50.584 10:02:23 -- setup/hugepages.sh@137 -- # setup output 00:05:50.584 10:02:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.584 10:02:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:53.876 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:53.876 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:54.448 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:05:54.448 10:02:27 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:54.448 10:02:27 -- setup/hugepages.sh@89 -- # local node 00:05:54.448 10:02:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:54.448 10:02:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.448 10:02:27 -- setup/hugepages.sh@92 -- # local surp 00:05:54.448 10:02:27 -- setup/hugepages.sh@93 -- # local resv 00:05:54.448 10:02:27 -- setup/hugepages.sh@94 -- # local anon 00:05:54.448 10:02:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:54.448 10:02:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.448 10:02:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.448 10:02:27 -- setup/common.sh@18 -- # local node= 00:05:54.448 10:02:27 -- setup/common.sh@19 -- # local var val 00:05:54.448 10:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.448 10:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.448 10:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.448 10:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.448 10:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.448 10:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.448 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.448 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69901184 kB' 'MemAvailable: 73873168 kB' 'Buffers: 2696 kB' 'Cached: 16110640 kB' 'SwapCached: 0 kB' 'Active: 13028072 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403616 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598132 kB' 'Mapped: 193136 kB' 'Shmem: 11808776 kB' 'KReclaimable: 503384 kB' 'Slab: 1158192 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 654808 kB' 'KernelStack: 22912 
kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13843104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220836 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 
10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.449 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.449 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.450 10:02:27 -- setup/common.sh@33 -- # echo 0 00:05:54.450 10:02:27 -- setup/common.sh@33 -- # return 0 00:05:54.450 10:02:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:54.450 10:02:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:54.450 10:02:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.450 10:02:27 -- setup/common.sh@18 -- # local node= 00:05:54.450 10:02:27 -- setup/common.sh@19 -- # local var val 00:05:54.450 10:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.450 10:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.450 10:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.450 10:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.450 10:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.450 10:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69903860 kB' 'MemAvailable: 73875844 kB' 'Buffers: 2696 kB' 'Cached: 16110640 kB' 'SwapCached: 0 kB' 'Active: 13027428 kB' 'Inactive: 3680104 kB' 'Active(anon): 12402972 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597492 kB' 'Mapped: 193084 kB' 'Shmem: 11808776 kB' 'KReclaimable: 503384 kB' 'Slab: 1158140 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 654756 kB' 'KernelStack: 22800 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13844632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 
kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- 
setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.450 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.450 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 
00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.451 10:02:27 -- setup/common.sh@33 -- # echo 0 00:05:54.451 10:02:27 -- setup/common.sh@33 -- # return 0 00:05:54.451 10:02:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:54.451 10:02:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:54.451 10:02:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:54.451 10:02:27 -- setup/common.sh@18 -- # local node= 00:05:54.451 10:02:27 -- setup/common.sh@19 -- # local var val 00:05:54.451 10:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.451 10:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.451 10:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.451 10:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.451 10:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.451 10:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69902968 kB' 'MemAvailable: 73874952 kB' 'Buffers: 2696 kB' 'Cached: 16110660 kB' 'SwapCached: 0 kB' 'Active: 13028072 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403616 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598000 kB' 'Mapped: 193084 kB' 'Shmem: 11808796 kB' 'KReclaimable: 503384 kB' 'Slab: 1158128 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 654744 kB' 'KernelStack: 23024 kB' 'PageTables: 9948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13844652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.451 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.451 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 
-- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 
00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.452 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.452 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.714 10:02:27 -- setup/common.sh@33 -- # echo 0 00:05:54.714 10:02:27 -- setup/common.sh@33 -- # return 0 00:05:54.714 10:02:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:54.714 10:02:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:54.714 nr_hugepages=1024 00:05:54.714 10:02:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:54.714 resv_hugepages=0 00:05:54.714 10:02:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:54.714 surplus_hugepages=0 00:05:54.714 10:02:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:54.714 anon_hugepages=0 00:05:54.714 10:02:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.714 10:02:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:54.714 10:02:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:54.714 10:02:27 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:54.714 10:02:27 -- setup/common.sh@18 -- # local node= 00:05:54.714 10:02:27 -- setup/common.sh@19 -- # local var val 00:05:54.714 10:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.714 10:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.714 10:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.714 10:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.714 10:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.714 10:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.714 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.714 10:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69900460 kB' 'MemAvailable: 73872444 kB' 'Buffers: 2696 kB' 'Cached: 16110672 kB' 'SwapCached: 0 kB' 'Active: 13027448 kB' 'Inactive: 3680104 kB' 'Active(anon): 12402992 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597348 kB' 'Mapped: 193084 kB' 'Shmem: 11808808 kB' 'KReclaimable: 503384 kB' 'Slab: 1158096 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 654712 kB' 'KernelStack: 22848 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13844668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.714 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # continue 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.715 10:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:54.715 10:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
[setup/common.sh@31-32 loops over the remaining /proc/meminfo fields (Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted), hitting continue on each one until HugePages_Total matches]
00:05:54.716 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.716 10:02:27 -- setup/common.sh@33 -- # echo 1024 00:05:54.716 10:02:27 -- setup/common.sh@33 -- # return 0
00:05:54.716 10:02:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.716 10:02:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:54.716 10:02:27 -- setup/hugepages.sh@27 -- # local node 00:05:54.716 10:02:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.716 10:02:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:54.716 10:02:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.716 10:02:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:54.716 10:02:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:54.716 10:02:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
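The get_meminfo calls being traced here reduce to a simple pattern: pick /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given), strip the "Node N " prefix that the per-node file adds, and return the value of one field. A minimal standalone sketch of that idea, assuming nothing about the real setup/common.sh beyond what the trace shows (the function name here is illustrative):

#!/usr/bin/env bash
# Sketch: return the value of a single meminfo field, system-wide or for one NUMA node.
get_meminfo_sketch() {
    local field=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip that, then
    # print whatever number follows "<field>:".
    sed 's/^Node [0-9]* //' "$file" | awk -v f="$field:" '$1 == f { print $2 }'
}

get_meminfo_sketch HugePages_Total      # 1024 on this host, matching the trace
get_meminfo_sketch HugePages_Surp 0     # 0 on node 0, matching the trace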
00:05:54.716 10:02:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:54.716 10:02:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:54.716 10:02:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:54.716 10:02:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.716 10:02:27 -- setup/common.sh@18 -- # local node=0 00:05:54.716 10:02:27 -- setup/common.sh@19 -- # local var val 00:05:54.716 10:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:54.716 10:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.716 10:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:54.716 10:02:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:54.716 10:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.716 10:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.716 10:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:54.716 10:02:27 -- setup/common.sh@31 -- # read -r var val _
00:05:54.716 10:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37790568 kB' 'MemUsed: 10277828 kB' 'SwapCached: 0 kB' 'Active: 6743036 kB' 'Inactive: 362940 kB' 'Active(anon): 6427028 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611444 kB' 'Mapped: 191524 kB' 'AnonPages: 497676 kB' 'Shmem: 5932496 kB' 'KernelStack: 11208 kB' 'PageTables: 6700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 439652 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 277636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 walks the node0 dump above field by field, hitting continue on every line until HugePages_Surp matches]
00:05:54.717 10:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.717 10:02:27 -- setup/common.sh@33 -- # echo 0 00:05:54.717 10:02:27 -- setup/common.sh@33 -- # return 0 00:05:54.717 10:02:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:54.717 10:02:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:54.717 10:02:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:54.717 10:02:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:54.717 10:02:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:54.717 node0=1024 expecting 1024
00:05:54.717 10:02:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:54.717
00:05:54.717 real 0m4.111s
00:05:54.717 user 0m1.324s
00:05:54.717 sys 0m1.998s
00:05:54.717 10:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.717 10:02:27 -- common/autotest_common.sh@10 -- # set +x
00:05:54.717 ************************************
00:05:54.717 END TEST default_setup
00:05:54.717 ************************************
00:05:54.717 10:02:27 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:54.717 10:02:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.717 10:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.717 10:02:27 -- common/autotest_common.sh@10 -- # set +x
00:05:54.717 ************************************
00:05:54.717 START TEST per_node_1G_alloc
00:05:54.717 ************************************
00:05:54.717 10:02:27 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:05:54.717 10:02:27 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:54.717 10:02:27 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:54.717 10:02:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:54.717 10:02:27 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:54.717 10:02:27 -- setup/hugepages.sh@51 -- # shift 00:05:54.717 10:02:27 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:54.717 10:02:27 -- setup/hugepages.sh@52 -- # local node_ids 00:05:54.717 10:02:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:54.717 10:02:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:54.717 10:02:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:54.717 10:02:27 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:54.717 10:02:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:54.717 10:02:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:54.717 10:02:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:54.717 10:02:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:54.717 10:02:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:54.717 10:02:27 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:54.717 10:02:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:54.717 10:02:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:54.717 10:02:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:54.717 10:02:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:54.717 10:02:27 -- setup/hugepages.sh@73 -- # return 0
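get_test_nr_hugepages 1048576 0 1 above turns a request of 1048576 kB (1 GiB) per node into nr_hugepages=512 for each of nodes 0 and 1: the requested size divided by the 2048 kB default hugepage size. A small sketch of that arithmetic (the real helper also covers other sizes and edge cases not shown in this trace):

#!/usr/bin/env bash
# Sketch: per-node and total hugepage counts as sized for this test.
size_kb=1048576                                                            # 1 GiB per node (log argument)
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)  # 2048 on this host
nodes=(0 1)                                                                # HUGENODE=0,1

per_node=$(( size_kb / hugepagesize_kb ))    # 1048576 / 2048 = 512
total=$(( per_node * ${#nodes[@]} ))         # 512 * 2 = 1024
echo "NRHUGE=$per_node per node, $total hugepages in total"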
00:05:54.717 10:02:27 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:54.717 10:02:27 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:54.717 10:02:27 -- setup/hugepages.sh@146 -- # setup output 00:05:54.717 10:02:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.717 10:02:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:58.014 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:58.014 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:58.014 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:58.014 10:02:30 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
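NRHUGE=512 and HUGENODE=0,1 are handed to scripts/setup.sh, whose only output in this run is the vfio-pci lines above; the per-node reservation itself goes through the kernel's sysfs hugepage counters. A standalone sketch of that mechanism, assuming 2048 kB hugepages and root privileges (an illustration of the kernel interface, not the setup.sh implementation):

#!/usr/bin/env bash
# Sketch: reserve hugepages on each requested NUMA node via sysfs and read back the result.
NRHUGE=${NRHUGE:-512}
IFS=',' read -r -a nodes <<< "${HUGENODE:-0,1}"

for node in "${nodes[@]}"; do
    f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" > "$f"        # needs root; the kernel may grant fewer pages than requested
    echo "node$node: requested $NRHUGE, granted $(cat "$f")"
done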
00:05:58.014 10:02:30 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:58.014 10:02:30 -- setup/hugepages.sh@89 -- # local node 00:05:58.014 10:02:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:58.014 10:02:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:58.014 10:02:30 -- setup/hugepages.sh@92 -- # local surp 00:05:58.014 10:02:30 -- setup/hugepages.sh@93 -- # local resv 00:05:58.014 10:02:30 -- setup/hugepages.sh@94 -- # local anon 00:05:58.014 10:02:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:58.014 10:02:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:58.014 10:02:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:58.014 10:02:30 -- setup/common.sh@18 -- # local node= 00:05:58.014 10:02:30 -- setup/common.sh@19 -- # local var val 00:05:58.014 10:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.014 10:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.014 10:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.014 10:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.014 10:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.014 10:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.014 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.014 10:02:30 -- setup/common.sh@31 -- # read -r var val _
00:05:58.014 10:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69913960 kB' 'MemAvailable: 73885944 kB' 'Buffers: 2696 kB' 'Cached: 16110748 kB' 'SwapCached: 0 kB' 'Active: 13024748 kB' 'Inactive: 3680104 kB' 'Active(anon): 12400292 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594584 kB' 'Mapped: 191968 kB' 'Shmem: 11808884 kB' 'KReclaimable: 503384 kB' 'Slab: 1158472 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655088 kB' 'KernelStack: 22848 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13830884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 221044 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32 walks the dump above field by field, hitting continue on every line until AnonHugePages matches]
00:05:58.015 10:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:58.015 10:02:30 -- setup/common.sh@33 -- # echo 0 00:05:58.015 10:02:30 -- setup/common.sh@33 -- # return 0 00:05:58.015 10:02:30 -- setup/hugepages.sh@97 -- # anon=0
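verify_nr_hugepages has just collected AnonHugePages (anon, 0 above) and will next read HugePages_Surp and HugePages_Rsvd, so the pool can be checked against the configured count the same way hugepages.sh@110 checked (( 1024 == nr_hugepages + surp + resv )) earlier. A compact sketch of that system-wide check, with the expected count passed in by hand:

#!/usr/bin/env bash
# Sketch: system-wide hugepage accounting check in the spirit of verify_nr_hugepages.
expected=${1:-1024}                                      # pages the test configured
field() { awk -v f="$1:" '$1 == f { print $2 }' /proc/meminfo; }

total=$(field HugePages_Total)
surp=$(field HugePages_Surp)       # surplus pages created by overcommit
resv=$(field HugePages_Rsvd)       # pages reserved by mappings but not yet faulted in
anon=$(field AnonHugePages)        # transparent hugepage usage, reported in kB

echo "total=$total surp=$surp resv=$resv anon=${anon}kB"
(( total == expected + surp + resv )) && echo "hugepage pool OK" || echo "hugepage pool mismatch"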
00:05:58.015 10:02:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:58.015 10:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.015 10:02:30 -- setup/common.sh@18 -- # local node= 00:05:58.015 10:02:30 -- setup/common.sh@19 -- # local var val 00:05:58.015 10:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.015 10:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.015 10:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.015 10:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.015 10:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.015 10:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.015 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.015 10:02:30 -- setup/common.sh@31 -- # read -r var val _
00:05:58.015 10:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69917764 kB' 'MemAvailable: 73889748 kB' 'Buffers: 2696 kB' 'Cached: 16110748 kB' 'SwapCached: 0 kB' 'Active: 13024948 kB' 'Inactive: 3680104 kB' 'Active(anon): 12400492 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594556 kB' 'Mapped: 191940 kB' 'Shmem: 11808884 kB' 'KReclaimable: 503384 kB' 'Slab: 1158476 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655092 kB' 'KernelStack: 22848 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13829432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220980 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB'
[setup/common.sh@31-32 walks the dump above field by field, hitting continue on every line, including HugePages_Total, HugePages_Free and HugePages_Rsvd, until HugePages_Surp matches]
00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.017 10:02:30 -- setup/common.sh@33 -- # echo 0 00:05:58.017 10:02:30 -- setup/common.sh@33 -- # return 0 00:05:58.017 10:02:30 -- setup/hugepages.sh@99 -- # surp=0
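With anon=0 and surp=0 established, the HugePages_Rsvd lookup below feeds the per-node bookkeeping that ends in lines like "node0=1024 expecting 1024" seen at the close of default_setup. A sketch of that per-node comparison, assuming the expected counts are simply known (512 per node, as configured for this test):

#!/usr/bin/env bash
# Sketch: compare expected per-node hugepage counts against the kernel's per-node meminfo,
# in the same "nodeN=<got> expecting <want>" style as the log output.
declare -A expected=([0]=512 [1]=512)        # per-node targets for per_node_1G_alloc

for node in "${!expected[@]}"; do
    got=$(sed 's/^Node [0-9]* //' "/sys/devices/system/node/node$node/meminfo" |
          awk '$1 == "HugePages_Total:" { print $2 }')
    echo "node$node=$got expecting ${expected[$node]}"
done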
setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.017 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.017 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 
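(Editor's note) The scan traced above is setup/common.sh walking every /proc/meminfo key and comparing it against the requested field, here HugePages_Rsvd, written as a backslash-escaped glob so the comparison is literal, then echoing the value column once the key matches. Below is a minimal, self-contained bash sketch of that lookup under the assumptions visible in this trace; the name get_meminfo_sketch and its exact argument handling are illustrative, not the real setup/common.sh interface.

#!/usr/bin/env bash
# Illustrative re-implementation of the field lookup traced here (an assumed
# simplification, not the real setup/common.sh get_meminfo).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument the trace switches to the per-node meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <N> "; drop that first, then
    # split each remaining line on ': ', as the IFS=': ' read in the trace does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

For example, get_meminfo_sketch HugePages_Rsvd would print 0 on this host, matching the echo 0 that closes the HugePages_Rsvd pass just below.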
00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:58.018 10:02:30 -- setup/common.sh@33 -- # echo 0 00:05:58.018 10:02:30 -- setup/common.sh@33 -- # return 0 00:05:58.018 10:02:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:58.018 10:02:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:58.018 nr_hugepages=1024 00:05:58.018 10:02:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:58.018 resv_hugepages=0 00:05:58.018 10:02:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:58.018 surplus_hugepages=0 00:05:58.018 10:02:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:58.018 anon_hugepages=0 00:05:58.018 10:02:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.018 10:02:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
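(Editor's note) At this point the script has resolved surp=0 and resv=0, echoed the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and asserts that the kernel's reported totals are consistent with the requested count before re-reading HugePages_Total. A loose sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from the previous snippet; the name check_hugepages and the exact expression are illustrative, and the script's own arithmetic at hugepages.sh@107 may differ in detail.

# Assumed sketch of the consistency check: the reported HugePages_Total must
# equal the requested page count plus any surplus and reserved pages.
check_hugepages() {
    local requested=$1
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == requested + surp + resv ))   # non-zero exit status on mismatch
}

check_hugepages 1024   # passes on this host: 1024 == 1024 + 0 + 0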
00:05:58.018 10:02:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:58.018 10:02:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:58.018 10:02:30 -- setup/common.sh@18 -- # local node= 00:05:58.018 10:02:30 -- setup/common.sh@19 -- # local var val 00:05:58.018 10:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.018 10:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.018 10:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.018 10:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.018 10:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.018 10:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69918240 kB' 'MemAvailable: 73890224 kB' 'Buffers: 2696 kB' 'Cached: 16110780 kB' 'SwapCached: 0 kB' 'Active: 13023412 kB' 'Inactive: 3680104 kB' 'Active(anon): 12398956 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 593276 kB' 'Mapped: 191940 kB' 'Shmem: 11808916 kB' 'KReclaimable: 503384 kB' 'Slab: 1158428 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655044 kB' 'KernelStack: 22720 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13826012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.018 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.018 10:02:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.018 10:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 
-- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 
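(Editor's note) The pass running here repeats the same scan for HugePages_Total against the global /proc/meminfo (it returns 1024 further below) and is followed by get_nodes, which enumerates /sys/devices/system/node/node* and reads each node's counters from its own meminfo file to confirm the 512 + 512 split. A short sketch of that per-node pass, again reusing the hypothetical get_meminfo_sketch helper; loop and variable names are illustrative.

# Assumed sketch of the per-node check: list the NUMA node directories and
# read HugePages_Total from each node's meminfo, mirroring the
# /sys/devices/system/node/node<N>/meminfo reads traced below.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    printf 'node%s HugePages_Total=%s\n' \
        "$node" "$(get_meminfo_sketch HugePages_Total "$node")"
done
# Expected on this host: node0 HugePages_Total=512, node1 HugePages_Total=512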
00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.019 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.019 10:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- 
setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:58.020 10:02:30 -- setup/common.sh@33 -- # echo 1024 00:05:58.020 10:02:30 -- setup/common.sh@33 -- # return 0 00:05:58.020 10:02:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:58.020 10:02:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:58.020 10:02:30 -- setup/hugepages.sh@27 -- # local node 00:05:58.020 10:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.020 10:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:58.020 10:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.020 10:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:58.020 10:02:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:58.020 10:02:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:58.020 10:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:58.020 10:02:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:58.020 10:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:58.020 10:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.020 10:02:30 -- setup/common.sh@18 -- # local node=0 00:05:58.020 10:02:30 -- setup/common.sh@19 -- # local var val 00:05:58.020 10:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.020 10:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.020 10:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:58.020 10:02:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:58.020 10:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.020 10:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:58.020 10:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38849628 kB' 'MemUsed: 9218768 kB' 'SwapCached: 0 kB' 'Active: 6741664 kB' 'Inactive: 362940 kB' 'Active(anon): 6425656 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611552 kB' 'Mapped: 190380 kB' 'AnonPages: 496268 kB' 'Shmem: 5932604 kB' 'KernelStack: 11160 kB' 'PageTables: 6612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 439792 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 277776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # 
continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.020 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.020 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 
10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@33 -- # echo 0 00:05:58.021 10:02:30 -- setup/common.sh@33 -- # return 0 00:05:58.021 10:02:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:58.021 10:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:58.021 10:02:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:58.021 10:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:58.021 10:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:58.021 10:02:30 -- setup/common.sh@18 -- # local node=1 00:05:58.021 10:02:30 -- setup/common.sh@19 -- # local var val 00:05:58.021 10:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:58.021 10:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.021 10:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:58.021 10:02:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:58.021 10:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.021 10:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.021 10:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31069312 kB' 'MemUsed: 13148892 kB' 'SwapCached: 0 kB' 'Active: 6281716 kB' 'Inactive: 3317164 kB' 'Active(anon): 5973268 kB' 'Inactive(anon): 0 kB' 'Active(file): 308448 kB' 'Inactive(file): 3317164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9501948 kB' 'Mapped: 1560 kB' 'AnonPages: 96996 kB' 'Shmem: 5876336 kB' 'KernelStack: 11528 kB' 'PageTables: 2244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341368 kB' 'Slab: 718628 kB' 'SReclaimable: 341368 kB' 'SUnreclaim: 377260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 
10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:30 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.021 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.021 10:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # continue 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:58.022 10:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:58.022 10:02:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:58.022 10:02:31 -- setup/common.sh@33 -- # echo 0 00:05:58.022 10:02:31 -- setup/common.sh@33 -- # return 0 00:05:58.022 10:02:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:58.022 10:02:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:58.022 10:02:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:58.022 10:02:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:58.022 node0=512 expecting 512 00:05:58.022 10:02:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:58.022 10:02:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:58.022 10:02:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:58.022 10:02:31 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:58.022 node1=512 expecting 512 00:05:58.022 10:02:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:58.022 00:05:58.022 real 0m3.122s 00:05:58.022 user 0m1.253s 00:05:58.022 sys 0m1.914s 00:05:58.022 10:02:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.022 10:02:31 -- common/autotest_common.sh@10 -- # set +x 00:05:58.022 ************************************ 00:05:58.022 END TEST per_node_1G_alloc 00:05:58.022 ************************************ 00:05:58.022 10:02:31 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:58.022 10:02:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.022 10:02:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.022 10:02:31 -- common/autotest_common.sh@10 -- # set +x 00:05:58.022 ************************************ 00:05:58.022 START TEST even_2G_alloc 00:05:58.022 ************************************ 00:05:58.022 10:02:31 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:58.022 10:02:31 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:58.022 10:02:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:58.022 10:02:31 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:58.022 10:02:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:58.022 10:02:31 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:58.022 10:02:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:58.022 10:02:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:58.022 10:02:31 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:58.022 10:02:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:58.022 10:02:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:58.022 10:02:31 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:58.022 10:02:31 -- setup/hugepages.sh@83 -- # : 512 00:05:58.022 10:02:31 -- setup/hugepages.sh@84 -- # : 1 00:05:58.022 10:02:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:58.022 10:02:31 -- setup/hugepages.sh@83 -- # : 0 00:05:58.022 10:02:31 -- setup/hugepages.sh@84 -- # : 0 00:05:58.022 10:02:31 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:58.022 10:02:31 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:58.022 10:02:31 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:58.022 10:02:31 -- setup/hugepages.sh@153 -- # setup output 00:05:58.022 10:02:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.022 10:02:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:00.562 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:00.562 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:00.562 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:00.562 10:02:33 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:00.562 10:02:33 -- setup/hugepages.sh@89 -- # local node 00:06:00.562 10:02:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:00.562 10:02:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:00.562 10:02:33 -- setup/hugepages.sh@92 -- # local surp 00:06:00.562 10:02:33 -- setup/hugepages.sh@93 -- # local resv 00:06:00.562 10:02:33 -- setup/hugepages.sh@94 -- # local anon 00:06:00.562 10:02:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:00.562 10:02:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:00.562 10:02:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:00.562 10:02:33 -- setup/common.sh@18 -- # local node= 00:06:00.562 10:02:33 -- setup/common.sh@19 -- # local var val 00:06:00.562 10:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.562 10:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.562 10:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.562 10:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.562 10:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.562 10:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69930684 kB' 'MemAvailable: 73902668 kB' 'Buffers: 2696 kB' 'Cached: 16110876 kB' 'SwapCached: 0 kB' 'Active: 13025268 kB' 'Inactive: 3680104 kB' 'Active(anon): 12400812 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594524 kB' 'Mapped: 192028 kB' 'Shmem: 11809012 kB' 'KReclaimable: 503384 kB' 'Slab: 1159456 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 656072 kB' 'KernelStack: 22720 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13826988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220900 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.562 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.562 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.563 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.563 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 
10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:00.828 10:02:33 -- 
setup/common.sh@33 -- # echo 0 00:06:00.828 10:02:33 -- setup/common.sh@33 -- # return 0 00:06:00.828 10:02:33 -- setup/hugepages.sh@97 -- # anon=0 00:06:00.828 10:02:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:00.828 10:02:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.828 10:02:33 -- setup/common.sh@18 -- # local node= 00:06:00.828 10:02:33 -- setup/common.sh@19 -- # local var val 00:06:00.828 10:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.828 10:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.828 10:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.828 10:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.828 10:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.828 10:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69933340 kB' 'MemAvailable: 73905324 kB' 'Buffers: 2696 kB' 'Cached: 16110880 kB' 'SwapCached: 0 kB' 'Active: 13024928 kB' 'Inactive: 3680104 kB' 'Active(anon): 12400472 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594248 kB' 'Mapped: 192028 kB' 'Shmem: 11809016 kB' 'KReclaimable: 503384 kB' 'Slab: 1159412 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 656028 kB' 'KernelStack: 22704 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13827000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.828 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.828 10:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 
10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 
10:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': 
' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.829 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.829 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.830 10:02:33 -- setup/common.sh@33 -- # echo 0 00:06:00.830 10:02:33 -- setup/common.sh@33 -- # return 0 00:06:00.830 10:02:33 -- setup/hugepages.sh@99 -- # surp=0 00:06:00.830 10:02:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:00.830 10:02:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:00.830 10:02:33 -- setup/common.sh@18 -- # local node= 00:06:00.830 10:02:33 -- setup/common.sh@19 -- # local var val 00:06:00.830 10:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.830 10:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.830 10:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.830 10:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.830 10:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.830 10:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69929932 kB' 'MemAvailable: 73901916 kB' 'Buffers: 2696 kB' 'Cached: 16110892 kB' 'SwapCached: 0 kB' 'Active: 13026668 kB' 'Inactive: 3680104 kB' 'Active(anon): 12402212 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 596432 kB' 'Mapped: 192452 kB' 'Shmem: 11809028 kB' 'KReclaimable: 503384 kB' 'Slab: 1159372 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655988 kB' 'KernelStack: 22704 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13830352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 
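
The repeated "[[ ... ]] / continue" pairs in the trace above are bash xtrace of the key lookup inside setup/common.sh's get_meminfo: the script prints the meminfo contents, then reads var/val pairs with IFS=': ' and returns the value once the requested key matches. A minimal sketch of that lookup pattern, assuming /proc/meminfo as the source; meminfo_value is an illustrative stand-in, not the SPDK helper itself:

    # Illustrative stand-in for the lookup pattern traced above, not setup/common.sh's get_meminfo.
    meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then    # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
                echo "$val"                  # the unit ("kB"), when present, lands in $_
                return 0
            fi
        done < "$file"
        return 1
    }

    anon=$(meminfo_value AnonHugePages)      # 0 kB in the run above

Per-node files (/sys/devices/system/node/nodeN/meminfo) additionally prefix each line with "Node <n> ", which the traced script strips before running the same read loop.
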
00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.830 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.830 10:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- 
setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:00.831 10:02:33 -- setup/common.sh@33 -- # echo 0 00:06:00.831 10:02:33 -- setup/common.sh@33 -- # return 0 00:06:00.831 10:02:33 -- setup/hugepages.sh@100 -- # resv=0 00:06:00.831 10:02:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:00.831 nr_hugepages=1024 00:06:00.831 10:02:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:00.831 resv_hugepages=0 00:06:00.831 10:02:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:00.831 surplus_hugepages=0 00:06:00.831 10:02:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:00.831 anon_hugepages=0 00:06:00.831 10:02:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:00.831 10:02:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:00.831 10:02:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:00.831 10:02:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:00.831 10:02:33 -- setup/common.sh@18 -- # local node= 00:06:00.831 10:02:33 -- setup/common.sh@19 -- # local var val 00:06:00.831 10:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.831 10:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.831 10:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.831 10:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.831 10:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.831 10:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.831 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.831 10:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69926216 kB' 'MemAvailable: 73898200 kB' 'Buffers: 2696 kB' 'Cached: 16110916 kB' 'SwapCached: 0 kB' 'Active: 13029304 kB' 'Inactive: 3680104 kB' 'Active(anon): 12404848 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599080 kB' 'Mapped: 192668 kB' 'Shmem: 11809052 kB' 'KReclaimable: 503384 kB' 'Slab: 1159372 
kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655988 kB' 'KernelStack: 22672 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13833148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220824 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 
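
The echoes just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the check traced next: 2097152 kB requested at the 2048 kB hugepage size reported in meminfo gives 1024 pages, split evenly as 512 per node across the two NUMA nodes in this run, and HugePages_Total must equal nr_hugepages plus surplus plus reserved. A sketch of that arithmetic under those assumptions, reusing the illustrative meminfo_value helper sketched earlier:

    size_kb=2097152                               # requested 2G allocation
    hugepagesize_kb=2048                          # Hugepagesize reported in this run
    nr_hugepages=$(( size_kb / hugepagesize_kb )) # = 1024
    no_nodes=2
    per_node=$(( nr_hugepages / no_nodes ))       # = 512 hugepages expected on each node

    hp_total=$(meminfo_value HugePages_Total)
    hp_surp=$(meminfo_value HugePages_Surp)
    hp_rsvd=$(meminfo_value HugePages_Rsvd)
    (( hp_total == nr_hugepages + hp_surp + hp_rsvd )) || echo "unexpected hugepage count" >&2
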
00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 
-- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:33 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.832 10:02:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.832 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.832 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.832 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.833 10:02:34 -- setup/common.sh@33 -- # echo 1024 00:06:00.833 10:02:34 -- setup/common.sh@33 -- # return 0 00:06:00.833 10:02:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:00.833 10:02:34 -- setup/hugepages.sh@112 -- # get_nodes 00:06:00.833 10:02:34 -- setup/hugepages.sh@27 -- # local node 00:06:00.833 10:02:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.833 10:02:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:00.833 10:02:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.833 10:02:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:00.833 10:02:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:00.833 10:02:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:00.833 10:02:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:00.833 10:02:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:00.833 10:02:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:00.833 10:02:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.833 10:02:34 -- setup/common.sh@18 -- # local node=0 00:06:00.833 10:02:34 -- setup/common.sh@19 -- # local var val 00:06:00.833 10:02:34 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.833 10:02:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.833 10:02:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:00.833 10:02:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:00.833 10:02:34 -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.833 10:02:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38858244 kB' 'MemUsed: 9210152 kB' 'SwapCached: 0 kB' 'Active: 6741576 kB' 'Inactive: 362940 kB' 'Active(anon): 6425568 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611632 kB' 'Mapped: 190388 kB' 'AnonPages: 496056 kB' 'Shmem: 5932684 kB' 'KernelStack: 11176 kB' 'PageTables: 6616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 440476 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 278460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.833 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.833 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 
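The setup/common.sh trace above is the per-node branch of its meminfo helper: with node=0 it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, mapfiles the snapshot, strips the leading "Node <n> " prefix, then walks the key/value pairs until it reaches the field named in $get (HugePages_Surp here), echoing that value and returning. A minimal standalone sketch of that pattern follows; get_meminfo_sketch is an illustrative name reconstructed from the trace, not the actual SPDK helper.

  # Reconstructed from the xtrace above; illustrative only, the real helper
  # in setup/common.sh handles more cases.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo mem=() line var val _
      shopt -s extglob
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long run of "continue" lines in the trace
          echo "$val"
          return 0
      done
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp 0   -> prints 0 for the run traced here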
00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- 
setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@33 -- # echo 0 00:06:00.834 10:02:34 -- setup/common.sh@33 -- # return 0 00:06:00.834 10:02:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:00.834 10:02:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:00.834 10:02:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:00.834 10:02:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:00.834 10:02:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:00.834 10:02:34 -- setup/common.sh@18 -- # local node=1 00:06:00.834 10:02:34 -- setup/common.sh@19 -- # local var val 00:06:00.834 10:02:34 -- setup/common.sh@20 -- # local mem_f mem 00:06:00.834 10:02:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.834 10:02:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:00.834 10:02:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:00.834 10:02:34 -- setup/common.sh@28 -- # 
mapfile -t mem 00:06:00.834 10:02:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31068284 kB' 'MemUsed: 13149920 kB' 'SwapCached: 0 kB' 'Active: 6282560 kB' 'Inactive: 3317164 kB' 'Active(anon): 5974112 kB' 'Inactive(anon): 0 kB' 'Active(file): 308448 kB' 'Inactive(file): 3317164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9501984 kB' 'Mapped: 1560 kB' 'AnonPages: 97840 kB' 'Shmem: 5876372 kB' 'KernelStack: 11528 kB' 'PageTables: 2296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341368 kB' 'Slab: 718896 kB' 'SReclaimable: 341368 kB' 'SUnreclaim: 377528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 
00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.834 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.834 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- 
setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 
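The hugepages.sh@110-117 lines earlier in this trace are the even_2G_alloc consistency check: the global snapshot reported HugePages_Total: 1024, which must equal nr_hugepages + surp + resv, and each NUMA node's expected count is then bumped by the reserved pages plus that node's HugePages_Surp (0 on both nodes in this run). A rough sketch of that per-node bookkeeping is below, assuming the get_meminfo_sketch helper sketched earlier; variable names follow the trace, the surrounding plumbing is assumed. The node0/node1 "expecting" lines that follow in the trace are the output of the final comparison loop.

  # Sketch only; mirrors the shape of hugepages.sh@112-128 as seen in the trace.
  declare -a nodes_test=(512 512)   # target split for even_2G_alloc (1024 pages, 2 nodes)
  declare -a nodes_sys=(512 512)    # per-node counts the system reports (source not shown in this excerpt)
  resv=0                            # HugePages_Rsvd from the global snapshot
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      # assumes get_meminfo_sketch from the earlier sketch is defined
      (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
      echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done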
00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # continue 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # IFS=': ' 00:06:00.835 10:02:34 -- setup/common.sh@31 -- # read -r var val _ 00:06:00.835 10:02:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:00.835 10:02:34 -- setup/common.sh@33 -- # echo 0 00:06:00.835 10:02:34 -- setup/common.sh@33 -- # return 0 00:06:00.835 10:02:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:00.835 10:02:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:00.835 10:02:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:00.835 10:02:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:00.835 node0=512 expecting 512 00:06:00.835 10:02:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:00.835 10:02:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:00.835 10:02:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:00.835 10:02:34 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:00.835 node1=512 expecting 512 00:06:00.835 10:02:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:00.835 00:06:00.835 real 0m3.008s 00:06:00.835 user 0m1.182s 00:06:00.835 sys 0m1.861s 00:06:00.835 10:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.835 10:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.835 ************************************ 00:06:00.835 END TEST even_2G_alloc 00:06:00.835 ************************************ 00:06:00.835 10:02:34 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:00.835 10:02:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.835 10:02:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.835 10:02:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.835 ************************************ 00:06:00.835 START TEST odd_alloc 00:06:00.835 ************************************ 00:06:00.835 10:02:34 -- common/autotest_common.sh@1104 -- # odd_alloc 00:06:00.835 10:02:34 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:00.835 10:02:34 -- setup/hugepages.sh@49 -- # local size=2098176 00:06:00.835 10:02:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:00.835 10:02:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:00.835 10:02:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:00.835 10:02:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:00.835 10:02:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:00.835 10:02:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:00.835 10:02:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:00.835 10:02:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:00.835 10:02:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:00.835 10:02:34 -- setup/hugepages.sh@83 -- # : 513 00:06:00.835 10:02:34 -- setup/hugepages.sh@84 -- # : 1 00:06:00.835 10:02:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:06:00.835 10:02:34 -- setup/hugepages.sh@83 -- # : 0 00:06:00.835 10:02:34 -- setup/hugepages.sh@84 -- # : 0 00:06:00.835 10:02:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:00.835 10:02:34 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:00.835 10:02:34 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:00.835 10:02:34 -- setup/hugepages.sh@160 -- # setup output 00:06:00.835 10:02:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.835 10:02:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:04.137 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:04.137 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:04.137 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:04.137 10:02:37 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:04.137 10:02:37 -- setup/hugepages.sh@89 -- # local node 00:06:04.137 10:02:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:04.137 10:02:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:04.137 10:02:37 -- setup/hugepages.sh@92 -- # local surp 00:06:04.137 10:02:37 -- setup/hugepages.sh@93 -- # local resv 00:06:04.137 10:02:37 -- setup/hugepages.sh@94 -- # local anon 00:06:04.137 10:02:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:04.137 10:02:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:04.137 10:02:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:04.137 10:02:37 -- setup/common.sh@18 -- # local node= 00:06:04.137 10:02:37 -- setup/common.sh@19 -- # local var val 00:06:04.137 10:02:37 -- setup/common.sh@20 -- # local mem_f mem 00:06:04.137 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.137 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.137 10:02:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.137 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.137 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.137 10:02:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69955308 kB' 'MemAvailable: 73927292 kB' 'Buffers: 2696 kB' 'Cached: 16111008 kB' 'SwapCached: 0 kB' 'Active: 13026056 kB' 'Inactive: 3680104 kB' 'Active(anon): 12401600 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595708 kB' 'Mapped: 191956 kB' 'Shmem: 11809144 kB' 'KReclaimable: 503384 kB' 'Slab: 1158764 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655380 kB' 'KernelStack: 22720 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13827644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220852 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.137 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.137 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 
10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:04.138 10:02:37 -- setup/common.sh@33 -- # echo 0 00:06:04.138 10:02:37 -- setup/common.sh@33 -- # return 0 00:06:04.138 10:02:37 -- setup/hugepages.sh@97 -- # anon=0 00:06:04.138 10:02:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:04.138 10:02:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:04.138 10:02:37 -- setup/common.sh@18 -- # local node= 00:06:04.138 10:02:37 -- setup/common.sh@19 -- # local var val 00:06:04.138 10:02:37 -- setup/common.sh@20 -- # local mem_f mem 00:06:04.138 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.138 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.138 10:02:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.138 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.138 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69955904 kB' 'MemAvailable: 73927888 kB' 'Buffers: 2696 kB' 'Cached: 16111012 kB' 'SwapCached: 0 kB' 'Active: 13025476 kB' 'Inactive: 3680104 kB' 'Active(anon): 12401020 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595124 kB' 
'Mapped: 191956 kB' 'Shmem: 11809148 kB' 'KReclaimable: 503384 kB' 'Slab: 1158760 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655376 kB' 'KernelStack: 22704 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13827656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.138 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.138 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 
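For the odd_alloc run being verified above (this second /proc/meminfo pass is looking for HugePages_Surp), the numbers line up: HUGEMEM=2049 (MB) becomes a request of 2049 * 1024 = 2098176 kB, which at the 2048 kB Hugepagesize rounds up to the nr_hugepages=1025 seen at hugepages.sh@57, hugepages.sh@81-84 split those pages 512 on one node and 513 on the other, and the snapshots in this trace indeed report HugePages_Total: 1025 with Hugetlb: 2099200 kB (1025 * 2048). A small arithmetic sketch of that sizing, with illustrative variable names:

  # Back-of-the-envelope check of the odd_alloc sizing visible in this trace
  # (names are illustrative; the real logic lives in setup/hugepages.sh).
  HUGEMEM_MB=2049
  Hugepagesize_kB=2048
  size_kB=$(( HUGEMEM_MB * 1024 ))                                        # 2098176 kB requested
  nr_hugepages=$(( (size_kB + Hugepagesize_kB - 1) / Hugepagesize_kB ))   # rounds up to 1025 pages
  echo "$nr_hugepages pages -> $(( nr_hugepages * Hugepagesize_kB )) kB Hugetlb"   # 2099200 kB
  # With 2 NUMA nodes, 1025 pages cannot split evenly: one node gets 512, the other 513.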
00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # continue 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # IFS=': ' 00:06:04.139 10:02:37 -- setup/common.sh@31 -- # read -r var val _ 00:06:04.139 10:02:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.139 10:02:37 -- 
setup/common.sh@32 -- # [remainder of the per-field scan of /proc/meminfo for HugePages_Surp: KReclaimable through Committed_AS, VmallocTotal through Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are all skipped with "continue"]
00:06:04.140 10:02:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.140 10:02:37 -- setup/common.sh@33 -- # echo 0
00:06:04.140 10:02:37 -- setup/common.sh@33 -- # return 0
00:06:04.140 10:02:37 -- setup/hugepages.sh@99 -- # surp=0
00:06:04.140 10:02:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:04.140 10:02:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:04.140 10:02:37 -- setup/common.sh@18 -- # local node=
00:06:04.140 10:02:37 -- setup/common.sh@19 -- # local var val
00:06:04.140 10:02:37 -- setup/common.sh@20 -- # local mem_f mem
00:06:04.140 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.140 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.140 10:02:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.140 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.140 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.140 10:02:37 -- setup/common.sh@31 -- # IFS=': '
00:06:04.140 10:02:37 -- setup/common.sh@31 -- # read -r var val _
00:06:04.140 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69955904 kB' 'MemAvailable: 73927888 kB' 'Buffers: 2696 kB' 'Cached: 16111016 kB' 'SwapCached: 0 kB' 'Active: 13025168 kB' 'Inactive: 3680104 kB' 'Active(anon): 12400712 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 594816 kB' 'Mapped: 191956 kB' 'Shmem: 11809152 kB' 'KReclaimable: 503384 kB' 'Slab: 1158760 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655376 kB' 'KernelStack: 22704 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13827672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220836 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB'
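For readers following the trace, the get_meminfo helper exercised here slurps the whole meminfo file into an array and strips the per-node "Node N " prefix before scanning it. A minimal standalone sketch of that normalization step, assuming extglob and /proc/meminfo as the input (the exact redirection the traced helper uses is not visible in the log):

  #!/usr/bin/env bash
  # Sketch only: normalize a meminfo file the way the traced helper appears to.
  shopt -s extglob                          # required for the +([0-9]) pattern below
  mem_f=/proc/meminfo                       # or /sys/devices/system/node/nodeN/meminfo
  mapfile -t mem < "$mem_f"                 # one array element per meminfo line
  mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix of per-node files
  printf '%s\n' "${mem[@]}"                 # normalized lines, e.g. "HugePages_Rsvd:        0"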
00:06:04.140 10:02:37 -- setup/common.sh@32 -- # [per-field scan for HugePages_Rsvd: MemTotal through HugePages_Free are all skipped with "continue"]
00:06:04.141 10:02:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:04.141 10:02:37 -- setup/common.sh@33 -- # echo 0
00:06:04.141 10:02:37 -- setup/common.sh@33 -- # return 0
00:06:04.141 10:02:37 -- setup/hugepages.sh@100 -- # resv=0
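The scan condensed above is a plain field lookup: each normalized line is split on ': ', the key is compared with the requested field, and the value is echoed on a match. A minimal sketch of that loop, reusing the mem array from the previous sketch (the function name is illustrative, not the project's helper):

  # Sketch only: look a single field up in the normalized "mem" array from above.
  get_meminfo_sketch() {
      local get=$1 var val line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Rsvd val=0
          [[ $var == "$get" ]] || continue         # skip every non-matching field
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo_sketch HugePages_Rsvd   # prints 0 on this machine, matching resv=0 above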
00:06:04.141 10:02:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:04.141 nr_hugepages=1025
00:06:04.141 10:02:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:04.141 resv_hugepages=0
00:06:04.141 10:02:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:04.141 surplus_hugepages=0
00:06:04.141 10:02:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:04.141 anon_hugepages=0
00:06:04.141 10:02:37 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:04.141 10:02:37 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:04.141 10:02:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:04.141 10:02:37 -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:04.141 10:02:37 -- setup/common.sh@18 -- # local node=
00:06:04.141 10:02:37 -- setup/common.sh@19 -- # local var val
00:06:04.141 10:02:37 -- setup/common.sh@20 -- # local mem_f mem
00:06:04.141 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.141 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.141 10:02:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.141 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.141 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.141 10:02:37 -- setup/common.sh@31 -- # IFS=': '
00:06:04.141 10:02:37 -- setup/common.sh@31 -- # read -r var val _
00:06:04.141 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69956084 kB' 'MemAvailable: 73928068 kB' 'Buffers: 2696 kB' 'Cached: 16111036 kB' 'SwapCached: 0 kB' 'Active: 13025944 kB' 'Inactive: 3680104 kB' 'Active(anon): 12401488 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 595600 kB' 'Mapped: 191956 kB' 'Shmem: 11809172 kB' 'KReclaimable: 503384 kB' 'Slab: 1158760 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655376 kB' 'KernelStack: 22736 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 13828576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB'
00:06:04.142 10:02:37 -- setup/common.sh@32 -- # [per-field scan for HugePages_Total: MemTotal through Unaccepted are all skipped with "continue"]
00:06:04.143 10:02:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:04.143 10:02:37 -- setup/common.sh@33 -- # echo 1025
00:06:04.143 10:02:37 -- setup/common.sh@33 -- # return 0
00:06:04.143 10:02:37 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:04.143 10:02:37 -- setup/hugepages.sh@112 -- # get_nodes
00:06:04.143 10:02:37 -- setup/hugepages.sh@27 -- # local node
00:06:04.143 10:02:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.143 10:02:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:04.143 10:02:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.143 10:02:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:06:04.143 10:02:37 -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:04.143 10:02:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
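At this point the test has read HugePages_Total, HugePages_Surp and HugePages_Rsvd back from the kernel and asserts that the reported total matches what it requested. A worked restatement of that consistency check with this run's values:

  # Sketch only: the accounting check behind hugepages.sh@107/@110, with this run's numbers.
  nr_hugepages=1025   # requested by the odd_alloc test
  surp=0              # HugePages_Surp read above
  resv=0              # HugePages_Rsvd read above
  total=1025          # HugePages_Total read above
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"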
00:06:04.143 10:02:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:04.143 10:02:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:04.143 10:02:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:04.143 10:02:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.143 10:02:37 -- setup/common.sh@18 -- # local node=0
00:06:04.143 10:02:37 -- setup/common.sh@19 -- # local var val
00:06:04.143 10:02:37 -- setup/common.sh@20 -- # local mem_f mem
00:06:04.143 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.143 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:04.143 10:02:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:04.143 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.143 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.143 10:02:37 -- setup/common.sh@31 -- # IFS=': '
00:06:04.143 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38882948 kB' 'MemUsed: 9185448 kB' 'SwapCached: 0 kB' 'Active: 6746408 kB' 'Inactive: 362940 kB' 'Active(anon): 6430400 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611720 kB' 'Mapped: 190900 kB' 'AnonPages: 500804 kB' 'Shmem: 5932772 kB' 'KernelStack: 11160 kB' 'PageTables: 6580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 440104 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 278088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:04.143 10:02:37 -- setup/common.sh@31 -- # read -r var val _
00:06:04.144 10:02:37 -- setup/common.sh@32 -- # [per-field scan of node0 meminfo: every field skipped with "continue" until HugePages_Surp]
00:06:04.144 10:02:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.144 10:02:37 -- setup/common.sh@33 -- # echo 0
00:06:04.144 10:02:37 -- setup/common.sh@33 -- # return 0
00:06:04.144 10:02:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:04.144 10:02:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:04.144 10:02:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:04.144 10:02:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:04.144 10:02:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.144 10:02:37 -- setup/common.sh@18 -- # local node=1
00:06:04.144 10:02:37 -- setup/common.sh@19 -- # local var val
00:06:04.144 10:02:37 -- setup/common.sh@20 -- # local mem_f mem
00:06:04.144 10:02:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.144 10:02:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:04.144 10:02:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:04.144 10:02:37 -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.144 10:02:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.144 10:02:37 -- setup/common.sh@31 -- # IFS=': '
00:06:04.144 10:02:37 -- setup/common.sh@31 -- # read -r var val _
00:06:04.144 10:02:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 31070056 kB' 'MemUsed: 13148148 kB' 'SwapCached: 0 kB' 'Active: 6283104 kB' 'Inactive: 3317164 kB' 'Active(anon): 5974656 kB' 'Inactive(anon): 0 kB' 'Active(file): 308448 kB' 'Inactive(file): 3317164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9502028 kB' 'Mapped: 1560 kB' 'AnonPages: 98356 kB' 'Shmem: 5876416 kB' 'KernelStack: 11528 kB' 'PageTables: 2288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341368 kB' 'Slab: 718656 kB' 'SReclaimable: 341368 kB' 'SUnreclaim: 377288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:06:04.145 10:02:37 -- setup/common.sh@32 -- # [per-field scan of node1 meminfo: every field skipped with "continue" until HugePages_Surp]
00:06:04.145 10:02:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.145 10:02:37 -- setup/common.sh@33 -- # echo 0
00:06:04.145 10:02:37 -- setup/common.sh@33 -- # return 0
00:06:04.145 10:02:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
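Neither node reports surplus pages, and the two dumps show the 1025 pages split 512/513 across node0 and node1, which is the odd split this test is built around. For reference, a hedged sketch of reading the same per-node counts directly from sysfs; the hugepages-2048kB path assumes the 2048 kB hugepage size reported in the dumps, and this is an alternative view rather than what the test script itself does:

  # Sketch only: per-node hugepage counts straight from sysfs (2048 kB page size assumed).
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      count=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node${node}: ${count} hugepages"
  done
  # Expected on this machine: node0: 512 hugepages, node1: 513 hugepages (512 + 513 = 1025).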
00:06:04.145 10:02:37 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:04.145 node1=513 expecting 512 00:06:04.145 10:02:37 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:04.145 00:06:04.145 real 0m3.144s 00:06:04.145 user 0m1.263s 00:06:04.145 sys 0m1.924s 00:06:04.145 10:02:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.145 10:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:04.145 ************************************ 00:06:04.145 END TEST odd_alloc 00:06:04.145 ************************************ 00:06:04.145 10:02:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:04.145 10:02:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.145 10:02:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.145 10:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:04.145 ************************************ 00:06:04.145 START TEST custom_alloc 00:06:04.145 ************************************ 00:06:04.145 10:02:37 -- common/autotest_common.sh@1104 -- # custom_alloc 00:06:04.145 10:02:37 -- setup/hugepages.sh@167 -- # local IFS=, 00:06:04.145 10:02:37 -- setup/hugepages.sh@169 -- # local node 00:06:04.145 10:02:37 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:04.145 10:02:37 -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:04.145 10:02:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:04.145 10:02:37 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:04.145 10:02:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:04.145 10:02:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:04.146 10:02:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.146 10:02:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:04.146 10:02:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.146 10:02:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:04.146 10:02:37 -- setup/hugepages.sh@83 -- # : 256 00:06:04.146 10:02:37 -- setup/hugepages.sh@84 -- # : 1 00:06:04.146 10:02:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:04.146 10:02:37 -- setup/hugepages.sh@83 -- # : 0 00:06:04.146 10:02:37 -- setup/hugepages.sh@84 -- # : 0 00:06:04.146 10:02:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:04.146 10:02:37 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:04.146 10:02:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:04.146 10:02:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:06:04.146 10:02:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.146 10:02:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:04.146 10:02:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.146 10:02:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:04.146 10:02:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:04.146 10:02:37 -- setup/hugepages.sh@78 -- # return 0 00:06:04.146 10:02:37 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:04.146 10:02:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:04.146 10:02:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:04.146 10:02:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:04.146 10:02:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:04.146 10:02:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.146 10:02:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:04.146 10:02:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.146 10:02:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.146 10:02:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:04.146 10:02:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:04.146 10:02:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:04.146 10:02:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:04.146 10:02:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:04.146 10:02:37 -- setup/hugepages.sh@78 -- # return 0 00:06:04.146 10:02:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:04.146 10:02:37 -- setup/hugepages.sh@187 -- # setup output 00:06:04.146 10:02:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:04.146 10:02:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:07.468 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:07.468 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.7 
(8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:07.468 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:07.468 10:02:40 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:06:07.468 10:02:40 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:07.468 10:02:40 -- setup/hugepages.sh@89 -- # local node 00:06:07.468 10:02:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:07.468 10:02:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:07.468 10:02:40 -- setup/hugepages.sh@92 -- # local surp 00:06:07.468 10:02:40 -- setup/hugepages.sh@93 -- # local resv 00:06:07.468 10:02:40 -- setup/hugepages.sh@94 -- # local anon 00:06:07.468 10:02:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:07.468 10:02:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:07.468 10:02:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:07.468 10:02:40 -- setup/common.sh@18 -- # local node= 00:06:07.468 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.468 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.468 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.468 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.468 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.468 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.468 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.468 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.468 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 68918328 kB' 'MemAvailable: 72890312 kB' 'Buffers: 2696 kB' 'Cached: 16111132 kB' 'SwapCached: 0 kB' 'Active: 13028348 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403892 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598188 kB' 'Mapped: 191992 kB' 'Shmem: 11809268 kB' 'KReclaimable: 503384 kB' 'Slab: 1158856 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655472 kB' 'KernelStack: 22704 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13828520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 
10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.469 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.469 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.470 10:02:40 -- setup/common.sh@33 -- # echo 0 00:06:07.470 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.470 10:02:40 -- setup/hugepages.sh@97 -- # anon=0 00:06:07.470 10:02:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:07.470 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.470 10:02:40 -- setup/common.sh@18 -- # local node= 00:06:07.470 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.470 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.470 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.470 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.470 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.470 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.470 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 68920448 kB' 'MemAvailable: 72892432 kB' 'Buffers: 2696 kB' 'Cached: 16111136 kB' 'SwapCached: 0 kB' 'Active: 13028436 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403980 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597800 kB' 'Mapped: 192068 kB' 'Shmem: 11809272 kB' 'KReclaimable: 503384 kB' 'Slab: 1158860 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655476 kB' 'KernelStack: 22608 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13827780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # 
continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.470 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.470 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 
10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.471 10:02:40 -- setup/common.sh@33 -- # echo 0 00:06:07.471 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.471 10:02:40 -- setup/hugepages.sh@99 -- # surp=0 00:06:07.471 10:02:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:07.471 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:07.471 10:02:40 -- setup/common.sh@18 -- # local node= 00:06:07.471 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.471 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.471 10:02:40 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:06:07.471 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.471 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.471 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.471 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 68920112 kB' 'MemAvailable: 72892096 kB' 'Buffers: 2696 kB' 'Cached: 16111152 kB' 'SwapCached: 0 kB' 'Active: 13027412 kB' 'Inactive: 3680104 kB' 'Active(anon): 12402956 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597200 kB' 'Mapped: 191960 kB' 'Shmem: 11809288 kB' 'KReclaimable: 503384 kB' 'Slab: 1158868 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655484 kB' 'KernelStack: 22640 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13827932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.471 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.471 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 
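The long runs of [[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue records here are bash xtrace from the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time: the file is slurped with mapfile, any "Node N " prefix is stripped, and an IFS=': ' read loop skips fields until the requested key matches, at which point the value is echoed and the function returns. A simplified reconstruction of that loop (the per-node /sys/devices/system/node path handling and the exact argument processing are assumptions, not copied from the script):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern seen in the trace

# Sketch of the get_meminfo pattern: return the value of one /proc/meminfo field.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # Assumed per-node variant; the real helper picks the node file similarly.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip "Node N " prefix on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue       # skip fields until the key matches
        echo "$val"                            # numeric value (kB for size fields)
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd    # e.g. prints 0 on this system, as in the trace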
00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.472 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.472 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.472 10:02:40 -- setup/common.sh@33 -- # echo 0 00:06:07.472 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.472 10:02:40 -- setup/hugepages.sh@100 -- # resv=0 00:06:07.472 10:02:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:06:07.472 nr_hugepages=1536 00:06:07.472 10:02:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:07.472 resv_hugepages=0 00:06:07.472 10:02:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:07.472 surplus_hugepages=0 00:06:07.472 10:02:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:07.472 anon_hugepages=0 00:06:07.472 10:02:40 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:07.472 10:02:40 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:06:07.472 10:02:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:07.472 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:07.472 10:02:40 -- setup/common.sh@18 -- # local node= 00:06:07.473 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.473 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.473 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.473 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.473 10:02:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.473 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.473 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 68920616 kB' 'MemAvailable: 72892600 kB' 'Buffers: 2696 kB' 'Cached: 16111168 kB' 'SwapCached: 0 
kB' 'Active: 13027748 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403292 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597532 kB' 'Mapped: 191960 kB' 'Shmem: 11809304 kB' 'KReclaimable: 503384 kB' 'Slab: 1158868 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655484 kB' 'KernelStack: 22656 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 13827948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 
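Stepping back from the field-by-field scan: the custom_alloc trace earlier in this block shows where the 1536 figure comes from. get_test_nr_hugepages divides the requested sizes by the 2048 kB default hugepage size (1048576 kB gives 512 pages for nodes_hp[0], 2097152 kB gives 1024 for nodes_hp[1]), the per-node request is handed to scripts/setup.sh as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (joined with a local IFS=,), and verify_nr_hugepages then checks the kernel's HugePages_Total against nr_hugepages plus surplus and reserved pages. A rough sketch of that arithmetic with this run's numbers hard-coded (the join helper and awk lookups are illustrative stand-ins, not the script's own code):

#!/usr/bin/env bash
default_hugepages=2048                          # kB, Hugepagesize from /proc/meminfo

nodes_hp[0]=$(( 1048576 / default_hugepages ))  # 1 GiB request  -> 512 pages
nodes_hp[1]=$(( 2097152 / default_hugepages ))  # 2 GiB request  -> 1024 pages

join_commas() { local IFS=,; echo "$*"; }       # stand-in for the script's local IFS=,

HUGENODE_parts=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE_parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done
HUGENODE=$(join_commas "${HUGENODE_parts[@]}")
echo "HUGENODE=$HUGENODE nr_hugepages=$nr_hugepages"   # 1536 pages in total

# verify_nr_hugepages-style check: configured total vs. what the kernel reports.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( ${total:-0} == nr_hugepages + ${surp:-0} + ${rsvd:-0} )) && echo "hugepage count matches"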
00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.473 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.473 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.474 10:02:40 -- setup/common.sh@33 -- # echo 1536 00:06:07.474 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.474 10:02:40 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:07.474 10:02:40 -- setup/hugepages.sh@112 -- # get_nodes 00:06:07.474 10:02:40 -- setup/hugepages.sh@27 -- # local node 00:06:07.474 10:02:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.474 10:02:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:07.474 10:02:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.474 10:02:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:07.474 10:02:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:07.474 10:02:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:07.474 10:02:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.474 10:02:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.474 10:02:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:07.474 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.474 10:02:40 -- setup/common.sh@18 -- # local node=0 00:06:07.474 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.474 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.474 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.474 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:07.474 10:02:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:07.474 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.474 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 38894488 kB' 'MemUsed: 9173908 kB' 'SwapCached: 0 kB' 'Active: 6744488 kB' 'Inactive: 362940 kB' 'Active(anon): 6428480 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611848 kB' 'Mapped: 190400 kB' 'AnonPages: 499056 kB' 'Shmem: 5932900 kB' 'KernelStack: 11224 kB' 'PageTables: 6804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 440156 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 278140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 
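At this point the global scan has confirmed HugePages_Total = 1536 with no reserved or surplus pages, get_nodes has found two NUMA nodes with an expected split of 512 and 1024 pages, and the node-0 snapshot just printed reports HugePages_Total: 512 and HugePages_Free: 512 while the script checks node 0's HugePages_Surp. The arithmetic being verified is simply 512 + 1024 = 1536 pages of 2048 kB, i.e. 3145728 kB, matching the Hugetlb figure in the global snapshot. A quick way to re-derive that split outside the harness, reusing the assumed get_meminfo_sketch helper above, would be:

# Sketch, reusing the assumed helper above: recompute the per-node split the
# custom_alloc test is checking (512 on node0 + 1024 on node1 = 1536 total).
total=$(get_meminfo_sketch HugePages_Total)
sum=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    pages=$(get_meminfo_sketch HugePages_Total "$n")
    echo "node${n}: ${pages} pages"
    sum=$((sum + pages))
done
echo "sum=${sum} global=${total}"    # expected on this box: sum=1536 global=1536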
00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.474 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.474 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 
-- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@33 -- # echo 0 00:06:07.475 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.475 10:02:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.475 10:02:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.475 10:02:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.475 10:02:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:07.475 10:02:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.475 10:02:40 -- setup/common.sh@18 -- # local node=1 
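Node 0's HugePages_Surp lookup above came back 0, meaning node 0 holds exactly its configured 512 pages with nothing allocated from surplus/overcommit, and the loop now repeats the same lookup against /sys/devices/system/node/node1/meminfo. Outside the test scripts, the same per-node counters can be read directly; the commands below are generic, not part of setup/common.sh.

# Hugepage counters for every NUMA node, straight from the per-node meminfo files:
grep -H HugePages /sys/devices/system/node/node*/meminfo
# Per-node pool size for the 2 MiB page size, as exposed in sysfs:
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages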
00:06:07.475 10:02:40 -- setup/common.sh@19 -- # local var val 00:06:07.475 10:02:40 -- setup/common.sh@20 -- # local mem_f mem 00:06:07.475 10:02:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.475 10:02:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:07.475 10:02:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:07.475 10:02:40 -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.475 10:02:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218204 kB' 'MemFree: 30025876 kB' 'MemUsed: 14192328 kB' 'SwapCached: 0 kB' 'Active: 6283880 kB' 'Inactive: 3317164 kB' 'Active(anon): 5975432 kB' 'Inactive(anon): 0 kB' 'Active(file): 308448 kB' 'Inactive(file): 3317164 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9502032 kB' 'Mapped: 1560 kB' 'AnonPages: 99176 kB' 'Shmem: 5876420 kB' 'KernelStack: 11480 kB' 'PageTables: 2228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 341368 kB' 'Slab: 718712 kB' 'SReclaimable: 341368 kB' 'SUnreclaim: 377344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.475 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.475 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- 
# read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- 
# continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # continue 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # IFS=': ' 00:06:07.476 10:02:40 -- setup/common.sh@31 -- # read -r var val _ 00:06:07.476 10:02:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.476 10:02:40 -- setup/common.sh@33 -- # echo 0 00:06:07.476 10:02:40 -- setup/common.sh@33 -- # return 0 00:06:07.476 10:02:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.476 10:02:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.476 10:02:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.476 10:02:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.476 10:02:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:07.476 node0=512 expecting 512 00:06:07.476 10:02:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.476 10:02:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.476 10:02:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.476 10:02:40 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:07.476 node1=1024 expecting 1024 00:06:07.476 10:02:40 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:07.476 00:06:07.476 real 0m3.099s 00:06:07.476 user 0m1.193s 00:06:07.476 sys 0m1.946s 00:06:07.476 10:02:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.476 10:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.476 ************************************ 00:06:07.476 END TEST custom_alloc 00:06:07.476 ************************************ 00:06:07.476 10:02:40 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:07.476 10:02:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.476 10:02:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.476 10:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:07.476 ************************************ 00:06:07.476 START TEST no_shrink_alloc 00:06:07.476 ************************************ 00:06:07.476 10:02:40 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:06:07.476 10:02:40 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:07.476 10:02:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:07.476 10:02:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:07.476 10:02:40 -- setup/hugepages.sh@51 -- # shift 00:06:07.476 10:02:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:07.476 10:02:40 -- setup/hugepages.sh@52 -- # local node_ids 00:06:07.476 10:02:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:07.476 10:02:40 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:07.476 10:02:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:07.476 10:02:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:07.477 10:02:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:07.477 10:02:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:07.477 10:02:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:07.477 10:02:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:07.477 10:02:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:07.477 10:02:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:07.477 10:02:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:07.477 10:02:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:07.477 10:02:40 -- setup/hugepages.sh@73 -- # return 0 00:06:07.477 10:02:40 -- setup/hugepages.sh@198 -- # setup output 00:06:07.477 10:02:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.477 10:02:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:10.019 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:10.019 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:10.019 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:10.282 10:02:43 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:10.282 10:02:43 -- setup/hugepages.sh@89 -- # local node 00:06:10.282 10:02:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:10.282 10:02:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:10.282 10:02:43 -- setup/hugepages.sh@92 -- # local surp 00:06:10.282 10:02:43 -- setup/hugepages.sh@93 -- # local resv 00:06:10.282 10:02:43 -- setup/hugepages.sh@94 -- # local anon 00:06:10.282 10:02:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:10.282 10:02:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:10.282 10:02:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:10.282 10:02:43 -- setup/common.sh@18 -- # local node= 00:06:10.282 10:02:43 -- setup/common.sh@19 -- # local var val 00:06:10.282 10:02:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.282 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.282 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.282 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.282 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.282 10:02:43 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.282 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.282 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.282 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69949552 kB' 'MemAvailable: 73921536 kB' 'Buffers: 2696 kB' 'Cached: 16111256 kB' 'SwapCached: 0 kB' 'Active: 13029220 kB' 'Inactive: 3680104 kB' 'Active(anon): 12404764 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598776 kB' 'Mapped: 192020 kB' 'Shmem: 11809392 kB' 'KReclaimable: 503384 kB' 'Slab: 1158732 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655348 kB' 'KernelStack: 22736 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13828792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:10.282 10:02:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.282 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
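The no_shrink_alloc test that started above asks get_test_nr_hugepages for 2097152 kB pinned to node 0 only, i.e. nr_hugepages=1024 with nodes_test[0]=1024, and the global snapshot just printed reflects that (HugePages_Total: 1024, Hugetlb: 2097152 kB). verify_nr_hugepages first confirms transparent hugepages are not set to [never] (the current setting is "always [madvise] never") and then scans for AnonHugePages, expecting 0. How the single-node pool itself gets requested is not shown in this excerpt; the generic sysfs interface for it looks like the sketch below (root required, illustrative rather than a transcript of scripts/setup.sh).

# Sketch of the generic per-node hugepage interface (assumed workflow, run as root):
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # -> 1024
grep -E 'HugePages_(Total|Free)|Hugetlb|AnonHugePages' /proc/meminfo
cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"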
00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.283 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.283 10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.283 10:02:43 -- setup/common.sh@33 -- # echo 0 00:06:10.283 10:02:43 -- setup/common.sh@33 -- # return 0 00:06:10.283 10:02:43 -- setup/hugepages.sh@97 -- # anon=0 00:06:10.283 10:02:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:10.283 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.283 10:02:43 -- setup/common.sh@18 -- # local node= 00:06:10.284 10:02:43 -- setup/common.sh@19 -- # local var val 00:06:10.284 10:02:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.284 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.284 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.284 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.284 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.284 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69950476 kB' 'MemAvailable: 73922460 kB' 'Buffers: 2696 kB' 'Cached: 16111260 kB' 'SwapCached: 0 kB' 'Active: 13028564 kB' 'Inactive: 3680104 kB' 'Active(anon): 12404108 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598140 kB' 'Mapped: 192000 kB' 'Shmem: 11809396 kB' 'KReclaimable: 503384 kB' 'Slab: 1158776 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655392 kB' 'KernelStack: 22704 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13828804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- 
setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.284 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.284 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.285 10:02:43 -- setup/common.sh@33 -- # echo 0 00:06:10.285 10:02:43 -- setup/common.sh@33 -- # return 0 00:06:10.285 10:02:43 -- setup/hugepages.sh@99 -- # surp=0 00:06:10.285 10:02:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:10.285 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:10.285 10:02:43 -- setup/common.sh@18 -- # local node= 00:06:10.285 10:02:43 -- setup/common.sh@19 -- # local var val 00:06:10.285 10:02:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.285 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.285 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.285 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.285 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.285 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69950308 kB' 'MemAvailable: 73922292 kB' 'Buffers: 2696 kB' 'Cached: 16111264 kB' 'SwapCached: 0 kB' 'Active: 13028260 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403804 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597832 kB' 'Mapped: 192000 kB' 'Shmem: 11809400 kB' 'KReclaimable: 503384 kB' 'Slab: 1158776 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655392 kB' 'KernelStack: 22704 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13828820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:10.285 10:02:43 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.285 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.285 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- 
setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 
10:02:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.286 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.286 10:02:43 -- setup/common.sh@33 -- # echo 0 00:06:10.286 
10:02:43 -- setup/common.sh@33 -- # return 0 00:06:10.286 10:02:43 -- setup/hugepages.sh@100 -- # resv=0 00:06:10.286 10:02:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:10.286 nr_hugepages=1024 00:06:10.286 10:02:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:10.286 resv_hugepages=0 00:06:10.286 10:02:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:10.286 surplus_hugepages=0 00:06:10.286 10:02:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:10.286 anon_hugepages=0 00:06:10.286 10:02:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.286 10:02:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:10.286 10:02:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:10.286 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:10.286 10:02:43 -- setup/common.sh@18 -- # local node= 00:06:10.286 10:02:43 -- setup/common.sh@19 -- # local var val 00:06:10.286 10:02:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.286 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.286 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.286 10:02:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.286 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.286 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.286 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69950968 kB' 'MemAvailable: 73922952 kB' 'Buffers: 2696 kB' 'Cached: 16111296 kB' 'SwapCached: 0 kB' 'Active: 13028244 kB' 'Inactive: 3680104 kB' 'Active(anon): 12403788 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 597756 kB' 'Mapped: 192000 kB' 'Shmem: 11809432 kB' 'KReclaimable: 503384 kB' 'Slab: 1158776 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655392 kB' 'KernelStack: 22688 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13828836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
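Editor's note on the trace above: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries are bash xtrace of the get_meminfo helper in setup/common.sh. It slurps /proc/meminfo (or a per-node meminfo file) with mapfile, strips any "Node <n> " prefix, then reads each "key: value" pair with IFS=': ' until the requested key matches, at which point the value is echoed. Below is a simplified, standalone reconstruction that follows the commands visible in the trace; it is only a sketch, not the exact SPDK helper.

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo lookup traced above (not the exact SPDK helper).
# Assumption: extglob is available, as implied by the +([0-9]) pattern in the trace.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}      # e.g. get=HugePages_Surp; node empty or a NUMA node index
    local mem_f=/proc/meminfo
    local mem var val _

    # A node argument switches the source to the per-node meminfo file,
    # as seen later in the trace for HugePages_Surp on node 0.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node <n> " prefix of per-node files

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # this comparison is what each xtrace entry shows
        echo "${val:-0}"
        return 0
    done
    echo 0
}

get_meminfo AnonHugePages     # system-wide lookup; prints 0 on the machine in this log
get_meminfo HugePages_Surp 0  # same key, restricted to NUMA node 0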
00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- 
# read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.287 10:02:43 -- setup/common.sh@32 -- # continue 
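Aside on the log format itself: every traced command carries two prefixes, an elapsed-time stamp (e.g. 00:06:10.287) presumably added by the CI timestamper, and a shell xtrace prefix of the form "<wall-clock> -- <script>@<line> -- # <command>". The exact PS4 used by the test scripts is not shown in this excerpt, so the snippet below is only an illustrative guess at how such a prefix can be produced with set -x.

#!/usr/bin/env bash
# Illustrative only: an assumed PS4 that yields "HH:MM:SS -- file@line -- # cmd"
# xtrace prefixes similar to the ones in this log; the real harness may differ.
export PS4='$(date +%T) -- ${BASH_SOURCE[0]##*/}@${LINENO} -- # '
set -x

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
echo "nr_hugepages=$nr_hugepages"
# Saved as example.sh, this prints trace lines such as:
#   10:02:43 -- example.sh@8 -- # echo nr_hugepages=1024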
00:06:10.287 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.287 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 
10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.288 10:02:43 -- setup/common.sh@33 -- # echo 1024 00:06:10.288 10:02:43 -- setup/common.sh@33 -- # return 0 00:06:10.288 10:02:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.288 10:02:43 -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.288 10:02:43 -- setup/hugepages.sh@27 -- # local node 00:06:10.288 10:02:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.288 10:02:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:10.288 10:02:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.288 10:02:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:10.288 10:02:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:10.288 10:02:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.288 10:02:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.288 10:02:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.288 10:02:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.288 10:02:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.288 10:02:43 
-- setup/common.sh@18 -- # local node=0 00:06:10.288 10:02:43 -- setup/common.sh@19 -- # local var val 00:06:10.288 10:02:43 -- setup/common.sh@20 -- # local mem_f mem 00:06:10.288 10:02:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.288 10:02:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.288 10:02:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.288 10:02:43 -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.288 10:02:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37825628 kB' 'MemUsed: 10242768 kB' 'SwapCached: 0 kB' 'Active: 6744524 kB' 'Inactive: 362940 kB' 'Active(anon): 6428516 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611908 kB' 'Mapped: 190440 kB' 'AnonPages: 498748 kB' 'Shmem: 5932960 kB' 'KernelStack: 11208 kB' 'PageTables: 6664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 162016 kB' 'Slab: 440276 kB' 'SReclaimable: 162016 kB' 'SUnreclaim: 278260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.288 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.288 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 
00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # continue 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # IFS=': ' 00:06:10.289 10:02:43 -- setup/common.sh@31 -- # read -r var val _ 00:06:10.289 10:02:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.289 10:02:43 -- setup/common.sh@33 -- # echo 0 00:06:10.289 10:02:43 -- setup/common.sh@33 -- # return 0 00:06:10.289 10:02:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:10.289 10:02:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:10.289 10:02:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:10.289 10:02:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:10.289 10:02:43 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:10.289 node0=1024 expecting 1024 00:06:10.289 10:02:43 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:10.289 10:02:43 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:10.289 10:02:43 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:10.289 10:02:43 -- setup/hugepages.sh@202 -- # setup output 00:06:10.289 10:02:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.289 10:02:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:13.583 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:13.583 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:13.583 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:13.583 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:13.583 10:02:46 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:06:13.583 10:02:46 -- setup/hugepages.sh@89 -- # local node 00:06:13.583 10:02:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:13.583 10:02:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:13.583 10:02:46 -- setup/hugepages.sh@92 -- # local surp 00:06:13.583 10:02:46 -- setup/hugepages.sh@93 -- # local resv 00:06:13.583 10:02:46 -- setup/hugepages.sh@94 -- # local anon 00:06:13.583 10:02:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:13.583 10:02:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:13.583 10:02:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:13.583 10:02:46 -- setup/common.sh@18 -- # local node= 00:06:13.583 10:02:46 -- setup/common.sh@19 -- # local var val 00:06:13.583 10:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.583 10:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.583 10:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.583 10:02:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.583 10:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.583 10:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69947356 kB' 'MemAvailable: 73919340 kB' 'Buffers: 2696 kB' 'Cached: 16111376 kB' 'SwapCached: 0 kB' 'Active: 13029732 kB' 'Inactive: 3680104 kB' 'Active(anon): 12405276 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599100 kB' 'Mapped: 191916 kB' 'Shmem: 11809512 kB' 'KReclaimable: 503384 kB' 'Slab: 1158776 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655392 kB' 'KernelStack: 22768 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13829440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.583 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.583 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:13.584 10:02:46 -- setup/common.sh@33 -- # echo 0 00:06:13.584 10:02:46 -- setup/common.sh@33 -- # return 0 00:06:13.584 10:02:46 -- setup/hugepages.sh@97 -- # anon=0 00:06:13.584 10:02:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:13.584 
10:02:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.584 10:02:46 -- setup/common.sh@18 -- # local node= 00:06:13.584 10:02:46 -- setup/common.sh@19 -- # local var val 00:06:13.584 10:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.584 10:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.584 10:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.584 10:02:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.584 10:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.584 10:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69957084 kB' 'MemAvailable: 73929176 kB' 'Buffers: 2696 kB' 'Cached: 16111380 kB' 'SwapCached: 0 kB' 'Active: 13029680 kB' 'Inactive: 3680104 kB' 'Active(anon): 12405224 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599148 kB' 'Mapped: 191916 kB' 'Shmem: 11809516 kB' 'KReclaimable: 503384 kB' 'Slab: 1158820 kB' 'SReclaimable: 503384 kB' 'SUnreclaim: 655436 kB' 'KernelStack: 22784 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13832484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220740 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.584 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.584 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # 
continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.585 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.585 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.586 10:02:46 -- setup/common.sh@33 -- # echo 0 00:06:13.586 10:02:46 -- setup/common.sh@33 -- # return 0 00:06:13.586 10:02:46 -- setup/hugepages.sh@99 -- # surp=0 00:06:13.586 10:02:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:13.586 10:02:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:13.586 10:02:46 -- setup/common.sh@18 -- # local node= 00:06:13.586 10:02:46 -- setup/common.sh@19 -- # local var val 00:06:13.586 10:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.586 10:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.586 10:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.586 10:02:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.586 10:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.586 10:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.586 10:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69957968 kB' 'MemAvailable: 73929920 kB' 'Buffers: 2696 kB' 'Cached: 16111392 kB' 'SwapCached: 0 kB' 'Active: 13029420 kB' 'Inactive: 3680104 kB' 'Active(anon): 12404964 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 
'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 598872 kB' 'Mapped: 191916 kB' 'Shmem: 11809528 kB' 'KReclaimable: 503352 kB' 'Slab: 1158780 kB' 'SReclaimable: 503352 kB' 'SUnreclaim: 655428 kB' 'KernelStack: 22848 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 13832500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 
-- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.586 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.586 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 
10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:13.587 10:02:46 -- setup/common.sh@33 -- # echo 0 00:06:13.587 10:02:46 -- setup/common.sh@33 -- # return 0 00:06:13.587 10:02:46 -- setup/hugepages.sh@100 -- # resv=0 00:06:13.587 10:02:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:13.587 nr_hugepages=1024 00:06:13.587 10:02:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:13.587 resv_hugepages=0 00:06:13.587 10:02:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:13.587 surplus_hugepages=0 00:06:13.587 10:02:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:13.587 anon_hugepages=0 00:06:13.587 10:02:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:13.587 10:02:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:13.587 10:02:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:13.587 10:02:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:13.587 10:02:46 -- setup/common.sh@18 -- # local node= 00:06:13.587 10:02:46 -- setup/common.sh@19 -- # local var val 00:06:13.587 10:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.587 10:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.587 10:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:13.587 10:02:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:13.587 10:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.587 10:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286600 kB' 'MemFree: 69957692 kB' 'MemAvailable: 73929644 kB' 'Buffers: 2696 kB' 'Cached: 16111404 kB' 'SwapCached: 0 kB' 'Active: 13030000 kB' 'Inactive: 3680104 kB' 'Active(anon): 12405544 kB' 'Inactive(anon): 0 kB' 'Active(file): 624456 kB' 'Inactive(file): 3680104 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599424 kB' 'Mapped: 191916 kB' 'Shmem: 11809540 kB' 'KReclaimable: 503352 kB' 'Slab: 1158780 kB' 'SReclaimable: 503352 kB' 'SUnreclaim: 655428 kB' 'KernelStack: 22912 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 
'Committed_AS: 13834032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220868 kB' 'VmallocChunk: 0 kB' 'Percpu: 116480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3984340 kB' 'DirectMap2M: 41832448 kB' 'DirectMap1G: 55574528 kB' 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.587 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.587 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- 
# IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
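Interleaved with these meminfo scans, the hugepages.sh entries (@97-@110 above, @112-@130 below) are verify_nr_hugepages doing its accounting: read AnonHugePages, HugePages_Surp and HugePages_Rsvd, print the nr_hugepages/resv/surplus/anon summary, assert that HugePages_Total equals nr_hugepages plus surplus plus reserved, then walk each NUMA node and compare what the node actually holds against what the test expects (the "node0=1024 expecting 1024" line). A rough, condensed sketch of that flow, reusing the get_meminfo_sketch helper above; the nodes_test seeding and the exact echo format are assumptions, only the checks themselves come from the trace:

# Sketch only: condensed from the hugepages.sh@97-130 steps visible in the trace.
# nodes_test holds the per-node counts the test expects; this run expects all
# 1024 pages on node0 and none on node1 (assumed seeding, not shown in the log).
nodes_test=([0]=1024 [1]=0)

verify_nr_hugepages_sketch() {
    local nr_hugepages=1024   # this run keeps the 1024 pages already allocated (CLEAR_HUGE=no)
    local anon surp resv total node got

    anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
    total=$(get_meminfo_sketch HugePages_Total)  # 1024

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # Global check (hugepages.sh@107/@109): every allocated page is accounted for.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per-node check (@115-@130): what each node actually holds vs. what is expected.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo_sketch HugePages_Surp "$node") ))
        got=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node=$got expecting ${nodes_test[node]}"
        [[ $got == "${nodes_test[node]}" ]] || return 1
    done
}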
00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.588 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.588 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:13.589 10:02:46 -- setup/common.sh@33 -- # echo 1024 00:06:13.589 10:02:46 -- setup/common.sh@33 -- # return 0 00:06:13.589 10:02:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:13.589 10:02:46 -- setup/hugepages.sh@112 -- # get_nodes 00:06:13.589 10:02:46 -- setup/hugepages.sh@27 -- # local node 00:06:13.589 10:02:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:13.589 10:02:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:13.589 10:02:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:13.589 10:02:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:13.589 10:02:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:13.589 10:02:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:13.589 10:02:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:13.589 10:02:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:13.589 10:02:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:13.589 10:02:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:13.589 10:02:46 -- setup/common.sh@18 -- # local node=0 00:06:13.589 10:02:46 -- setup/common.sh@19 -- # local var val 00:06:13.589 10:02:46 -- setup/common.sh@20 -- # local mem_f mem 00:06:13.589 10:02:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:13.589 10:02:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:13.589 10:02:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:13.589 10:02:46 -- setup/common.sh@28 -- # mapfile -t mem 00:06:13.589 10:02:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 37811116 kB' 'MemUsed: 10257280 kB' 'SwapCached: 0 kB' 'Active: 6745032 kB' 'Inactive: 362940 kB' 'Active(anon): 6429024 kB' 'Inactive(anon): 0 kB' 'Active(file): 316008 kB' 'Inactive(file): 362940 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6611916 kB' 'Mapped: 190416 kB' 'AnonPages: 499136 kB' 'Shmem: 5932968 kB' 'KernelStack: 11448 kB' 'PageTables: 7324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161984 kB' 'Slab: 440392 kB' 'SReclaimable: 161984 kB' 'SUnreclaim: 278408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 
10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.589 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.589 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 
10:02:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # continue 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # IFS=': ' 00:06:13.590 10:02:46 -- setup/common.sh@31 -- # read -r var val _ 00:06:13.590 10:02:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:13.590 10:02:46 -- setup/common.sh@33 -- # echo 0 00:06:13.590 10:02:46 -- setup/common.sh@33 -- # return 0 00:06:13.590 10:02:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:13.590 10:02:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:13.590 10:02:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:13.590 10:02:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:13.590 10:02:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:13.590 node0=1024 expecting 1024 00:06:13.590 10:02:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:13.590 00:06:13.590 real 0m6.175s 00:06:13.590 user 0m2.446s 00:06:13.590 sys 0m3.814s 00:06:13.590 10:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.590 10:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.590 ************************************ 00:06:13.590 END TEST no_shrink_alloc 00:06:13.590 ************************************ 00:06:13.590 10:02:46 -- setup/hugepages.sh@217 -- # clear_hp 00:06:13.590 10:02:46 -- setup/hugepages.sh@37 -- # local node hp 00:06:13.590 10:02:46 -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:06:13.590 10:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:13.590 10:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:06:13.590 10:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:13.590 10:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:06:13.590 10:02:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:13.590 10:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:13.590 10:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:06:13.590 10:02:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:13.590 10:02:46 -- setup/hugepages.sh@41 -- # echo 0 00:06:13.590 10:02:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:13.590 10:02:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:13.590 00:06:13.590 real 0m23.052s 00:06:13.590 user 0m8.827s 00:06:13.590 sys 0m13.736s 00:06:13.590 10:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.590 10:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.590 ************************************ 00:06:13.590 END TEST hugepages 00:06:13.590 ************************************ 00:06:13.590 10:02:46 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:13.590 10:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.590 10:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.590 10:02:46 -- common/autotest_common.sh@10 -- # set +x 00:06:13.590 ************************************ 00:06:13.590 START TEST driver 00:06:13.590 ************************************ 00:06:13.590 10:02:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:13.590 * Looking for test storage... 
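The hugepages suite that just finished ends with clear_hp, traced above, which walks every NUMA node and writes each hugepage count back to zero so the next test starts clean. A minimal sketch of that cleanup, assuming the standard sysfs layout and root privileges (the trace shows the echo 0 writes but not their redirect targets):

  #!/usr/bin/env bash
  # Reset nr_hugepages to 0 for every hugepage size on every NUMA node,
  # as the clear_hp loop in the trace does with its echo 0 writes.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          [[ -d $hp ]] || continue          # skip if the glob did not match
          echo 0 > "$hp/nr_hugepages"
      done
  done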
00:06:13.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:13.590 10:02:46 -- setup/driver.sh@68 -- # setup reset 00:06:13.590 10:02:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:13.590 10:02:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.782 10:02:50 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:17.782 10:02:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.782 10:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.782 10:02:50 -- common/autotest_common.sh@10 -- # set +x 00:06:17.782 ************************************ 00:06:17.782 START TEST guess_driver 00:06:17.782 ************************************ 00:06:17.782 10:02:50 -- common/autotest_common.sh@1104 -- # guess_driver 00:06:17.782 10:02:50 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:17.782 10:02:50 -- setup/driver.sh@47 -- # local fail=0 00:06:17.782 10:02:50 -- setup/driver.sh@49 -- # pick_driver 00:06:17.782 10:02:50 -- setup/driver.sh@36 -- # vfio 00:06:17.782 10:02:50 -- setup/driver.sh@21 -- # local iommu_grups 00:06:17.782 10:02:50 -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:17.782 10:02:50 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:17.782 10:02:50 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:17.782 10:02:50 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:17.782 10:02:50 -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:06:17.782 10:02:50 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:17.782 10:02:50 -- setup/driver.sh@14 -- # mod vfio_pci 00:06:17.782 10:02:50 -- setup/driver.sh@12 -- # dep vfio_pci 00:06:17.782 10:02:50 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:17.783 10:02:50 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:17.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:17.783 10:02:50 -- setup/driver.sh@30 -- # return 0 00:06:17.783 10:02:50 -- setup/driver.sh@37 -- # echo vfio-pci 00:06:17.783 10:02:50 -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:17.783 10:02:50 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:17.783 10:02:50 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:17.783 Looking for driver=vfio-pci 00:06:17.783 10:02:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:17.783 10:02:50 -- setup/driver.sh@45 -- # setup output config 00:06:17.783 10:02:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.783 10:02:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:20.316 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.316 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:06:20.316 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:20.576 10:02:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:20.576 10:02:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:20.576 10:02:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:21.515 10:02:54 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:06:21.515 10:02:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:21.515 10:02:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:21.515 10:02:54 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:21.515 10:02:54 -- setup/driver.sh@65 -- # setup reset 00:06:21.515 10:02:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:21.515 10:02:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:25.708 00:06:25.708 real 0m8.111s 00:06:25.708 user 0m2.347s 00:06:25.708 sys 0m4.123s 00:06:25.708 10:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.708 10:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.708 ************************************ 00:06:25.708 END TEST guess_driver 00:06:25.708 ************************************ 00:06:25.708 00:06:25.708 real 0m12.267s 00:06:25.708 user 0m3.512s 00:06:25.708 sys 0m6.274s 00:06:25.708 10:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.708 10:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.708 ************************************ 00:06:25.708 END TEST driver 00:06:25.708 ************************************ 00:06:25.708 10:02:58 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:25.708 10:02:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.708 10:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.708 10:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:25.708 ************************************ 00:06:25.708 START TEST devices 00:06:25.708 ************************************ 00:06:25.708 10:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:25.966 * Looking for test storage... 
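The driver suite above settled on vfio-pci because /sys/kernel/iommu_groups was populated (175 groups in this run) and modprobe could resolve the vfio_pci module chain. A condensed sketch of that decision, not the SPDK script itself; the fallback string is the one the script compares against in the trace, and the noiommu-mode check seen in the trace is omitted here:

  #!/usr/bin/env bash
  # Pick vfio-pci when the IOMMU is active and the module resolves,
  # mirroring the checks shown in the guess_driver trace above.
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
          echo vfio-pci
      else
          echo 'No valid driver found'
      fi
  }
  driver=$(pick_driver)
  echo "Looking for driver=$driver"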
00:06:25.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:25.966 10:02:59 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:25.966 10:02:59 -- setup/devices.sh@192 -- # setup reset 00:06:25.966 10:02:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:25.966 10:02:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:29.252 10:03:02 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:29.252 10:03:02 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:06:29.252 10:03:02 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:06:29.252 10:03:02 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:06:29.252 10:03:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:06:29.252 10:03:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:06:29.252 10:03:02 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:06:29.252 10:03:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:29.252 10:03:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:06:29.252 10:03:02 -- setup/devices.sh@196 -- # blocks=() 00:06:29.252 10:03:02 -- setup/devices.sh@196 -- # declare -a blocks 00:06:29.252 10:03:02 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:29.252 10:03:02 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:29.252 10:03:02 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:29.252 10:03:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:29.252 10:03:02 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:29.252 10:03:02 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:29.252 10:03:02 -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:06:29.252 10:03:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:06:29.253 10:03:02 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:29.253 10:03:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:06:29.253 10:03:02 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:29.253 No valid GPT data, bailing 00:06:29.253 10:03:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:29.253 10:03:02 -- scripts/common.sh@393 -- # pt= 00:06:29.253 10:03:02 -- scripts/common.sh@394 -- # return 1 00:06:29.253 10:03:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:29.253 10:03:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:29.253 10:03:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:29.253 10:03:02 -- setup/common.sh@80 -- # echo 1000204886016 00:06:29.253 10:03:02 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:29.253 10:03:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:29.253 10:03:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:06:29.253 10:03:02 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:29.253 10:03:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:29.253 10:03:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:29.253 10:03:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.253 10:03:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.253 10:03:02 -- common/autotest_common.sh@10 -- # set +x 00:06:29.253 ************************************ 00:06:29.253 START TEST nvme_mount 00:06:29.253 ************************************ 00:06:29.253 10:03:02 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:06:29.253 10:03:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:29.253 10:03:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:29.253 10:03:02 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:29.253 10:03:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:29.253 10:03:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:29.253 10:03:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:29.253 10:03:02 -- setup/common.sh@40 -- # local part_no=1 00:06:29.253 10:03:02 -- setup/common.sh@41 -- # local size=1073741824 00:06:29.253 10:03:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:29.253 10:03:02 -- setup/common.sh@44 -- # parts=() 00:06:29.253 10:03:02 -- setup/common.sh@44 -- # local parts 00:06:29.253 10:03:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:29.253 10:03:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:29.253 10:03:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:29.253 10:03:02 -- setup/common.sh@46 -- # (( part++ )) 00:06:29.253 10:03:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:29.253 10:03:02 -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:29.253 10:03:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:29.253 10:03:02 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:30.190 Creating new GPT entries in memory. 00:06:30.190 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:30.190 other utilities. 00:06:30.190 10:03:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:30.190 10:03:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:30.190 10:03:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:30.190 10:03:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:30.190 10:03:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:31.127 Creating new GPT entries in memory. 00:06:31.127 The operation has completed successfully. 
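With the 1 GiB partition created above, the nvme_mount steps traced below make an ext4 filesystem on it, mount it under the test directory, and drop a dummy file for the later verify step. Condensed, the flow is roughly the following; the disk name and mount point are the ones from this run and should be treated as placeholders, and touch stands in for the script's own test-file creation:

  #!/usr/bin/env bash
  set -e
  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all                  # wipe any existing partition table
  sgdisk "$disk" --new=1:2048:2099199       # one 1 GiB partition (512-byte sectors)
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                 # quiet, force, as in the trace
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                    # file checked by the verify step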
00:06:31.127 10:03:04 -- setup/common.sh@57 -- # (( part++ )) 00:06:31.127 10:03:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:31.127 10:03:04 -- setup/common.sh@62 -- # wait 3250326 00:06:31.127 10:03:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:31.127 10:03:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:31.127 10:03:04 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:31.127 10:03:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:31.127 10:03:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:31.127 10:03:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:31.387 10:03:04 -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:31.387 10:03:04 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:06:31.387 10:03:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:31.387 10:03:04 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:31.387 10:03:04 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:31.387 10:03:04 -- setup/devices.sh@53 -- # local found=0 00:06:31.387 10:03:04 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:31.387 10:03:04 -- setup/devices.sh@56 -- # : 00:06:31.387 10:03:04 -- setup/devices.sh@59 -- # local pci status 00:06:31.387 10:03:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:31.387 10:03:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:06:31.387 10:03:04 -- setup/devices.sh@47 -- # setup output config 00:06:31.387 10:03:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:31.387 10:03:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:33.925 10:03:07 -- setup/devices.sh@63 -- # found=1 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 
10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.925 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.925 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:33.926 10:03:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:33.926 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.185 10:03:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:34.185 10:03:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:34.185 10:03:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.185 10:03:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:34.185 10:03:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:34.185 10:03:07 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:34.185 10:03:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.185 10:03:07 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.185 10:03:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:34.185 10:03:07 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:34.185 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:34.185 10:03:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:34.185 10:03:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:34.444 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:34.444 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:34.444 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:34.444 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:34.444 10:03:07 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:34.445 10:03:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:34.445 10:03:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.445 10:03:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:34.445 10:03:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:34.445 10:03:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.445 10:03:07 -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:34.445 10:03:07 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:06:34.445 10:03:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:34.445 10:03:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:34.445 10:03:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:34.445 10:03:07 -- setup/devices.sh@53 -- # local found=0 00:06:34.445 10:03:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:34.445 10:03:07 -- setup/devices.sh@56 -- # : 00:06:34.445 10:03:07 -- setup/devices.sh@59 -- # local pci status 00:06:34.445 10:03:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.445 10:03:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:06:34.445 10:03:07 -- setup/devices.sh@47 -- # setup output config 00:06:34.445 10:03:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:34.445 10:03:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:36.982 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:36.982 10:03:10 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:36.982 10:03:10 -- setup/devices.sh@63 -- # found=1 00:06:36.982 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.241 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.241 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:37.242 10:03:10 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:37.242 10:03:10 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:37.242 10:03:10 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:37.242 10:03:10 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:37.242 10:03:10 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:37.242 10:03:10 -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:06:37.242 10:03:10 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:06:37.242 10:03:10 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:37.242 10:03:10 -- setup/devices.sh@50 -- # local mount_point= 00:06:37.242 10:03:10 -- setup/devices.sh@51 -- # local test_file= 00:06:37.242 10:03:10 -- setup/devices.sh@53 -- # local found=0 00:06:37.242 10:03:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:37.242 10:03:10 -- setup/devices.sh@59 -- # local pci status 00:06:37.242 10:03:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.242 10:03:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:06:37.242 10:03:10 -- setup/devices.sh@47 -- # setup output config 00:06:37.242 10:03:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:37.242 10:03:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:40.530 10:03:13 -- 
setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:40.530 10:03:13 -- setup/devices.sh@63 -- # found=1 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.530 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.530 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.531 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.531 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.531 10:03:13 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:40.531 10:03:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:40.531 10:03:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:40.531 10:03:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:40.531 10:03:13 -- setup/devices.sh@68 -- # return 0 00:06:40.531 10:03:13 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:40.531 10:03:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:40.531 10:03:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:06:40.531 10:03:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:40.531 10:03:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:40.531 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:40.531 00:06:40.531 real 0m11.133s 00:06:40.531 user 0m3.236s 00:06:40.531 sys 0m5.744s 00:06:40.531 10:03:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.531 10:03:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.531 ************************************ 00:06:40.531 END TEST nvme_mount 00:06:40.531 ************************************ 00:06:40.531 10:03:13 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:40.531 10:03:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.531 10:03:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.531 10:03:13 -- common/autotest_common.sh@10 -- # set +x 00:06:40.531 ************************************ 00:06:40.531 START TEST dm_mount 00:06:40.531 ************************************ 00:06:40.531 10:03:13 -- common/autotest_common.sh@1104 -- # dm_mount 00:06:40.531 10:03:13 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:40.531 10:03:13 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:40.531 10:03:13 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:40.531 10:03:13 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:40.531 10:03:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:40.531 10:03:13 -- setup/common.sh@40 -- # local part_no=2 00:06:40.531 10:03:13 -- setup/common.sh@41 -- # local size=1073741824 00:06:40.531 10:03:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:40.531 10:03:13 -- setup/common.sh@44 -- # parts=() 00:06:40.531 10:03:13 -- setup/common.sh@44 -- # local parts 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:40.531 10:03:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part++ )) 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:40.531 10:03:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part++ )) 00:06:40.531 10:03:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:40.531 10:03:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:40.531 10:03:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:40.531 10:03:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:41.472 Creating new GPT entries in memory. 00:06:41.472 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:41.472 other utilities. 00:06:41.472 10:03:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:41.472 10:03:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:41.472 10:03:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:41.472 10:03:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:41.472 10:03:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:42.409 Creating new GPT entries in memory. 00:06:42.409 The operation has completed successfully. 
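The first of the two partitions for the dm test has just been created; the trace below adds a second one at sectors 2099200-4196351 and then builds a device-mapper device named nvme_dm_test on top of them before formatting and mounting it. The dmsetup table itself is not shown in the trace, so the linear concatenation below is an illustrative assumption rather than the script's actual table:

  #!/usr/bin/env bash
  set -e
  disk=/dev/nvme0n1
  sgdisk "$disk" --new=2:2099200:4196351      # second 1 GiB partition, as traced below

  # Assumed layout: concatenate the two partitions into one linear dm device.
  len1=$(blockdev --getsz "${disk}p1")        # partition sizes in 512-byte sectors
  len2=$(blockdev --getsz "${disk}p2")
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
      "$len1" "${disk}p1" "$len1" "$len2" "${disk}p2" | dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test        # resolves to /dev/dm-0 in this run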
00:06:42.409 10:03:15 -- setup/common.sh@57 -- # (( part++ )) 00:06:42.409 10:03:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:42.409 10:03:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:42.409 10:03:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:42.409 10:03:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:43.347 The operation has completed successfully. 00:06:43.347 10:03:16 -- setup/common.sh@57 -- # (( part++ )) 00:06:43.347 10:03:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:43.347 10:03:16 -- setup/common.sh@62 -- # wait 3254988 00:06:43.347 10:03:16 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:43.347 10:03:16 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:43.347 10:03:16 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:43.347 10:03:16 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:43.347 10:03:16 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:43.347 10:03:16 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:43.347 10:03:16 -- setup/devices.sh@161 -- # break 00:06:43.347 10:03:16 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:43.347 10:03:16 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:43.347 10:03:16 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:43.347 10:03:16 -- setup/devices.sh@166 -- # dm=dm-0 00:06:43.347 10:03:16 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:43.347 10:03:16 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:43.347 10:03:16 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:43.347 10:03:16 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:43.347 10:03:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:43.347 10:03:16 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:43.347 10:03:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:43.347 10:03:16 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:43.607 10:03:16 -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:43.607 10:03:16 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:06:43.607 10:03:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:43.607 10:03:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:43.607 10:03:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:43.607 10:03:16 -- setup/devices.sh@53 -- # local found=0 00:06:43.607 10:03:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:43.607 10:03:16 -- setup/devices.sh@56 -- # : 00:06:43.607 10:03:16 -- 
setup/devices.sh@59 -- # local pci status 00:06:43.607 10:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:43.607 10:03:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:06:43.607 10:03:16 -- setup/devices.sh@47 -- # setup output config 00:06:43.607 10:03:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:43.607 10:03:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:46.143 10:03:19 -- setup/devices.sh@63 -- # found=1 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.143 10:03:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:46.143 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.403 10:03:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:46.403 10:03:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:46.403 10:03:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:46.403 10:03:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:46.403 10:03:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:46.403 10:03:19 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:46.403 10:03:19 -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:46.403 10:03:19 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:06:46.403 10:03:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:46.403 10:03:19 -- setup/devices.sh@50 -- # local mount_point= 00:06:46.403 10:03:19 -- setup/devices.sh@51 -- # local test_file= 00:06:46.403 10:03:19 -- setup/devices.sh@53 -- # local found=0 00:06:46.403 10:03:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:46.403 10:03:19 -- setup/devices.sh@59 -- # local pci status 00:06:46.403 10:03:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:46.403 10:03:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:06:46.403 10:03:19 -- setup/devices.sh@47 -- # setup output config 00:06:46.403 10:03:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.403 10:03:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:48.940 10:03:21 -- setup/devices.sh@63 -- # found=1 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:06:48.940 10:03:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:48.940 10:03:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:48.940 10:03:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:48.940 10:03:22 -- setup/devices.sh@68 -- # return 0 00:06:48.940 10:03:22 -- setup/devices.sh@187 -- # cleanup_dm 00:06:48.940 10:03:22 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:48.940 10:03:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:48.940 10:03:22 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:48.940 10:03:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:48.940 10:03:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:48.940 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:48.940 10:03:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:48.940 10:03:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:48.940 00:06:48.940 real 0m8.618s 00:06:48.940 user 0m1.966s 00:06:48.940 sys 0m3.656s 00:06:48.940 10:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.940 10:03:22 -- common/autotest_common.sh@10 -- # set +x 00:06:48.940 ************************************ 00:06:48.940 END TEST dm_mount 00:06:48.940 ************************************ 00:06:48.940 10:03:22 -- setup/devices.sh@1 -- # cleanup 00:06:48.940 10:03:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:48.940 10:03:22 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:48.940 10:03:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:48.940 10:03:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:48.940 10:03:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:48.940 10:03:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:49.199 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:49.199 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:49.199 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:49.199 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:49.199 10:03:22 -- setup/devices.sh@12 -- # cleanup_dm 00:06:49.199 10:03:22 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:49.199 10:03:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:49.199 10:03:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:49.199 10:03:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:49.199 10:03:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:49.199 10:03:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:49.199 00:06:49.199 real 0m23.445s 00:06:49.199 user 0m6.478s 00:06:49.199 sys 0m11.687s 00:06:49.199 10:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.199 10:03:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.199 ************************************ 00:06:49.199 END TEST devices 00:06:49.199 ************************************ 00:06:49.199 00:06:49.199 real 1m19.583s 00:06:49.199 user 0m25.777s 00:06:49.199 sys 0m44.141s 00:06:49.199 10:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.199 10:03:22 -- common/autotest_common.sh@10 -- # set +x 00:06:49.199 ************************************ 00:06:49.199 END TEST setup.sh 00:06:49.199 ************************************ 00:06:49.199 10:03:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:51.824 Hugepages 00:06:51.824 node hugesize free / total 00:06:51.824 node0 1048576kB 0 / 0 00:06:51.824 node0 2048kB 2048 / 2048 00:06:51.824 node1 1048576kB 0 / 0 00:06:51.824 node1 2048kB 0 / 0 00:06:51.824 00:06:51.824 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:52.083 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:52.083 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:52.083 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:52.083 10:03:25 -- spdk/autotest.sh@141 -- # uname -s 00:06:52.083 10:03:25 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:06:52.083 10:03:25 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:06:52.083 10:03:25 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:55.375 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:06:55.375 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:55.375 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:55.944 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:06:55.944 10:03:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:56.881 10:03:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:56.881 10:03:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:56.881 10:03:30 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:57.140 10:03:30 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:57.140 10:03:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:57.140 10:03:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:57.140 10:03:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:57.140 10:03:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:57.140 10:03:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:57.140 10:03:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:57.140 10:03:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:06:57.140 10:03:30 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:00.429 Waiting for block devices as requested 00:07:00.429 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:07:00.429 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:00.429 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:00.688 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:00.688 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:00.688 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:00.946 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:00.946 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:00.946 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:00.946 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:01.205 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:01.205 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:01.205 10:03:34 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:07:01.205 10:03:34 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1487 -- # grep 0000:86:00.0/nvme/nvme 00:07:01.205 10:03:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:07:01.205 10:03:34 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:07:01.205 10:03:34 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1530 -- # grep oacs 00:07:01.205 10:03:34 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:07:01.205 10:03:34 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:07:01.205 10:03:34 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:07:01.205 10:03:34 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:07:01.205 10:03:34 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:07:01.205 10:03:34 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:07:01.205 10:03:34 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:07:01.463 10:03:34 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:07:01.463 10:03:34 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:07:01.463 10:03:34 -- common/autotest_common.sh@1542 -- # continue 00:07:01.463 10:03:34 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:07:01.463 10:03:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:01.463 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:01.463 10:03:34 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:07:01.463 10:03:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:01.463 10:03:34 -- common/autotest_common.sh@10 -- # set +x 00:07:01.463 10:03:34 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:04.754 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:04.754 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:04.755 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:05.323 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:07:05.323 10:03:38 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:07:05.323 10:03:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.323 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.323 10:03:38 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:07:05.323 10:03:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:05.323 10:03:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:05.323 10:03:38 -- common/autotest_common.sh@1562 -- # bdfs=() 00:07:05.323 10:03:38 -- common/autotest_common.sh@1562 -- # local bdfs 00:07:05.323 10:03:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:05.323 10:03:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:05.323 
10:03:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:05.323 10:03:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:05.323 10:03:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:05.323 10:03:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:05.583 10:03:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:05.583 10:03:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:07:05.583 10:03:38 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:07:05.583 10:03:38 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:07:05.583 10:03:38 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:07:05.583 10:03:38 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:05.583 10:03:38 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:07:05.583 10:03:38 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:86:00.0 00:07:05.583 10:03:38 -- common/autotest_common.sh@1577 -- # [[ -z 0000:86:00.0 ]] 00:07:05.583 10:03:38 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3264456 00:07:05.583 10:03:38 -- common/autotest_common.sh@1583 -- # waitforlisten 3264456 00:07:05.583 10:03:38 -- common/autotest_common.sh@819 -- # '[' -z 3264456 ']' 00:07:05.583 10:03:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.583 10:03:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.583 10:03:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.583 10:03:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.583 10:03:38 -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 10:03:38 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.583 [2024-04-17 10:03:38.742796] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
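The opal_revert_cleanup step above backgrounds spdk_tgt and then sits in waitforlisten until the JSON-RPC socket answers. A minimal approximation of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten in autotest_common.sh adds pid checks and a retry limit that this sketch omits):

    # Launch the target, then poll its RPC socket until rpc_get_methods succeeds
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    until "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done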
00:07:05.583 [2024-04-17 10:03:38.742850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264456 ] 00:07:05.583 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.583 [2024-04-17 10:03:38.821537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.583 [2024-04-17 10:03:38.908551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.583 [2024-04-17 10:03:38.908708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.520 10:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.520 10:03:39 -- common/autotest_common.sh@852 -- # return 0 00:07:06.520 10:03:39 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:06.520 10:03:39 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:06.520 10:03:39 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:07:09.809 nvme0n1 00:07:09.809 10:03:42 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:09.809 [2024-04-17 10:03:42.966681] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:07:09.809 request: 00:07:09.809 { 00:07:09.809 "nvme_ctrlr_name": "nvme0", 00:07:09.809 "password": "test", 00:07:09.809 "method": "bdev_nvme_opal_revert", 00:07:09.809 "req_id": 1 00:07:09.809 } 00:07:09.809 Got JSON-RPC error response 00:07:09.809 response: 00:07:09.809 { 00:07:09.809 "code": -32602, 00:07:09.809 "message": "Invalid parameters" 00:07:09.809 } 00:07:09.809 10:03:42 -- common/autotest_common.sh@1589 -- # true 00:07:09.809 10:03:42 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:09.809 10:03:42 -- common/autotest_common.sh@1593 -- # killprocess 3264456 00:07:09.809 10:03:42 -- common/autotest_common.sh@926 -- # '[' -z 3264456 ']' 00:07:09.809 10:03:42 -- common/autotest_common.sh@930 -- # kill -0 3264456 00:07:09.809 10:03:42 -- common/autotest_common.sh@931 -- # uname 00:07:09.809 10:03:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:09.809 10:03:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3264456 00:07:09.809 10:03:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:09.809 10:03:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:09.809 10:03:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3264456' 00:07:09.809 killing process with pid 3264456 00:07:09.809 10:03:43 -- common/autotest_common.sh@945 -- # kill 3264456 00:07:09.809 10:03:43 -- common/autotest_common.sh@950 -- # wait 3264456 00:07:11.712 10:03:44 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:07:11.712 10:03:44 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:11.712 10:03:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:11.712 10:03:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:11.712 10:03:44 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:11.712 10:03:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.712 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.712 10:03:44 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:11.712 10:03:44 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.712 10:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.712 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.712 ************************************ 00:07:11.712 START TEST env 00:07:11.712 ************************************ 00:07:11.712 10:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:11.712 * Looking for test storage... 00:07:11.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:11.712 10:03:44 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:11.712 10:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.712 10:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.712 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.712 ************************************ 00:07:11.712 START TEST env_memory 00:07:11.712 ************************************ 00:07:11.712 10:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:11.712 00:07:11.712 00:07:11.712 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.712 http://cunit.sourceforge.net/ 00:07:11.712 00:07:11.712 00:07:11.712 Suite: memory 00:07:11.712 Test: alloc and free memory map ...[2024-04-17 10:03:44.903543] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:11.712 passed 00:07:11.712 Test: mem map translation ...[2024-04-17 10:03:44.932793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:11.712 [2024-04-17 10:03:44.932815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:11.712 [2024-04-17 10:03:44.932867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:11.712 [2024-04-17 10:03:44.932877] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:11.712 passed 00:07:11.712 Test: mem map registration ...[2024-04-17 10:03:44.993101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:11.712 [2024-04-17 10:03:44.993121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:11.712 passed 00:07:11.972 Test: mem map adjacent registrations ...passed 00:07:11.972 00:07:11.972 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.972 suites 1 1 n/a 0 0 00:07:11.972 tests 4 4 4 0 0 00:07:11.972 asserts 152 152 152 0 n/a 00:07:11.972 00:07:11.972 Elapsed time = 0.207 seconds 00:07:11.972 00:07:11.972 real 0m0.219s 00:07:11.972 user 0m0.205s 00:07:11.972 sys 0m0.014s 00:07:11.972 10:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.972 10:03:45 -- common/autotest_common.sh@10 -- # set +x 
00:07:11.972 ************************************ 00:07:11.972 END TEST env_memory 00:07:11.972 ************************************ 00:07:11.972 10:03:45 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:11.972 10:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.972 10:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.972 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.972 ************************************ 00:07:11.972 START TEST env_vtophys 00:07:11.972 ************************************ 00:07:11.972 10:03:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:11.972 EAL: lib.eal log level changed from notice to debug 00:07:11.972 EAL: Detected lcore 0 as core 0 on socket 0 00:07:11.972 EAL: Detected lcore 1 as core 1 on socket 0 00:07:11.972 EAL: Detected lcore 2 as core 2 on socket 0 00:07:11.972 EAL: Detected lcore 3 as core 3 on socket 0 00:07:11.972 EAL: Detected lcore 4 as core 4 on socket 0 00:07:11.972 EAL: Detected lcore 5 as core 5 on socket 0 00:07:11.972 EAL: Detected lcore 6 as core 6 on socket 0 00:07:11.972 EAL: Detected lcore 7 as core 8 on socket 0 00:07:11.972 EAL: Detected lcore 8 as core 9 on socket 0 00:07:11.972 EAL: Detected lcore 9 as core 10 on socket 0 00:07:11.972 EAL: Detected lcore 10 as core 11 on socket 0 00:07:11.972 EAL: Detected lcore 11 as core 12 on socket 0 00:07:11.972 EAL: Detected lcore 12 as core 13 on socket 0 00:07:11.972 EAL: Detected lcore 13 as core 14 on socket 0 00:07:11.972 EAL: Detected lcore 14 as core 16 on socket 0 00:07:11.972 EAL: Detected lcore 15 as core 17 on socket 0 00:07:11.972 EAL: Detected lcore 16 as core 18 on socket 0 00:07:11.972 EAL: Detected lcore 17 as core 19 on socket 0 00:07:11.972 EAL: Detected lcore 18 as core 20 on socket 0 00:07:11.972 EAL: Detected lcore 19 as core 21 on socket 0 00:07:11.972 EAL: Detected lcore 20 as core 22 on socket 0 00:07:11.972 EAL: Detected lcore 21 as core 24 on socket 0 00:07:11.972 EAL: Detected lcore 22 as core 25 on socket 0 00:07:11.972 EAL: Detected lcore 23 as core 26 on socket 0 00:07:11.972 EAL: Detected lcore 24 as core 27 on socket 0 00:07:11.972 EAL: Detected lcore 25 as core 28 on socket 0 00:07:11.972 EAL: Detected lcore 26 as core 29 on socket 0 00:07:11.972 EAL: Detected lcore 27 as core 30 on socket 0 00:07:11.972 EAL: Detected lcore 28 as core 0 on socket 1 00:07:11.972 EAL: Detected lcore 29 as core 1 on socket 1 00:07:11.972 EAL: Detected lcore 30 as core 2 on socket 1 00:07:11.972 EAL: Detected lcore 31 as core 3 on socket 1 00:07:11.972 EAL: Detected lcore 32 as core 4 on socket 1 00:07:11.972 EAL: Detected lcore 33 as core 5 on socket 1 00:07:11.972 EAL: Detected lcore 34 as core 6 on socket 1 00:07:11.972 EAL: Detected lcore 35 as core 8 on socket 1 00:07:11.972 EAL: Detected lcore 36 as core 9 on socket 1 00:07:11.972 EAL: Detected lcore 37 as core 10 on socket 1 00:07:11.972 EAL: Detected lcore 38 as core 11 on socket 1 00:07:11.972 EAL: Detected lcore 39 as core 12 on socket 1 00:07:11.972 EAL: Detected lcore 40 as core 13 on socket 1 00:07:11.972 EAL: Detected lcore 41 as core 14 on socket 1 00:07:11.972 EAL: Detected lcore 42 as core 16 on socket 1 00:07:11.972 EAL: Detected lcore 43 as core 17 on socket 1 00:07:11.972 EAL: Detected lcore 44 as core 18 on socket 1 00:07:11.972 EAL: Detected lcore 45 as core 19 on socket 1 00:07:11.972 EAL: Detected lcore 46 as 
core 20 on socket 1 00:07:11.972 EAL: Detected lcore 47 as core 21 on socket 1 00:07:11.972 EAL: Detected lcore 48 as core 22 on socket 1 00:07:11.973 EAL: Detected lcore 49 as core 24 on socket 1 00:07:11.973 EAL: Detected lcore 50 as core 25 on socket 1 00:07:11.973 EAL: Detected lcore 51 as core 26 on socket 1 00:07:11.973 EAL: Detected lcore 52 as core 27 on socket 1 00:07:11.973 EAL: Detected lcore 53 as core 28 on socket 1 00:07:11.973 EAL: Detected lcore 54 as core 29 on socket 1 00:07:11.973 EAL: Detected lcore 55 as core 30 on socket 1 00:07:11.973 EAL: Detected lcore 56 as core 0 on socket 0 00:07:11.973 EAL: Detected lcore 57 as core 1 on socket 0 00:07:11.973 EAL: Detected lcore 58 as core 2 on socket 0 00:07:11.973 EAL: Detected lcore 59 as core 3 on socket 0 00:07:11.973 EAL: Detected lcore 60 as core 4 on socket 0 00:07:11.973 EAL: Detected lcore 61 as core 5 on socket 0 00:07:11.973 EAL: Detected lcore 62 as core 6 on socket 0 00:07:11.973 EAL: Detected lcore 63 as core 8 on socket 0 00:07:11.973 EAL: Detected lcore 64 as core 9 on socket 0 00:07:11.973 EAL: Detected lcore 65 as core 10 on socket 0 00:07:11.973 EAL: Detected lcore 66 as core 11 on socket 0 00:07:11.973 EAL: Detected lcore 67 as core 12 on socket 0 00:07:11.973 EAL: Detected lcore 68 as core 13 on socket 0 00:07:11.973 EAL: Detected lcore 69 as core 14 on socket 0 00:07:11.973 EAL: Detected lcore 70 as core 16 on socket 0 00:07:11.973 EAL: Detected lcore 71 as core 17 on socket 0 00:07:11.973 EAL: Detected lcore 72 as core 18 on socket 0 00:07:11.973 EAL: Detected lcore 73 as core 19 on socket 0 00:07:11.973 EAL: Detected lcore 74 as core 20 on socket 0 00:07:11.973 EAL: Detected lcore 75 as core 21 on socket 0 00:07:11.973 EAL: Detected lcore 76 as core 22 on socket 0 00:07:11.973 EAL: Detected lcore 77 as core 24 on socket 0 00:07:11.973 EAL: Detected lcore 78 as core 25 on socket 0 00:07:11.973 EAL: Detected lcore 79 as core 26 on socket 0 00:07:11.973 EAL: Detected lcore 80 as core 27 on socket 0 00:07:11.973 EAL: Detected lcore 81 as core 28 on socket 0 00:07:11.973 EAL: Detected lcore 82 as core 29 on socket 0 00:07:11.973 EAL: Detected lcore 83 as core 30 on socket 0 00:07:11.973 EAL: Detected lcore 84 as core 0 on socket 1 00:07:11.973 EAL: Detected lcore 85 as core 1 on socket 1 00:07:11.973 EAL: Detected lcore 86 as core 2 on socket 1 00:07:11.973 EAL: Detected lcore 87 as core 3 on socket 1 00:07:11.973 EAL: Detected lcore 88 as core 4 on socket 1 00:07:11.973 EAL: Detected lcore 89 as core 5 on socket 1 00:07:11.973 EAL: Detected lcore 90 as core 6 on socket 1 00:07:11.973 EAL: Detected lcore 91 as core 8 on socket 1 00:07:11.973 EAL: Detected lcore 92 as core 9 on socket 1 00:07:11.973 EAL: Detected lcore 93 as core 10 on socket 1 00:07:11.973 EAL: Detected lcore 94 as core 11 on socket 1 00:07:11.973 EAL: Detected lcore 95 as core 12 on socket 1 00:07:11.973 EAL: Detected lcore 96 as core 13 on socket 1 00:07:11.973 EAL: Detected lcore 97 as core 14 on socket 1 00:07:11.973 EAL: Detected lcore 98 as core 16 on socket 1 00:07:11.973 EAL: Detected lcore 99 as core 17 on socket 1 00:07:11.973 EAL: Detected lcore 100 as core 18 on socket 1 00:07:11.973 EAL: Detected lcore 101 as core 19 on socket 1 00:07:11.973 EAL: Detected lcore 102 as core 20 on socket 1 00:07:11.973 EAL: Detected lcore 103 as core 21 on socket 1 00:07:11.973 EAL: Detected lcore 104 as core 22 on socket 1 00:07:11.973 EAL: Detected lcore 105 as core 24 on socket 1 00:07:11.973 EAL: Detected lcore 106 as core 25 on socket 1 
00:07:11.973 EAL: Detected lcore 107 as core 26 on socket 1 00:07:11.973 EAL: Detected lcore 108 as core 27 on socket 1 00:07:11.973 EAL: Detected lcore 109 as core 28 on socket 1 00:07:11.973 EAL: Detected lcore 110 as core 29 on socket 1 00:07:11.973 EAL: Detected lcore 111 as core 30 on socket 1 00:07:11.973 EAL: Maximum logical cores by configuration: 128 00:07:11.973 EAL: Detected CPU lcores: 112 00:07:11.973 EAL: Detected NUMA nodes: 2 00:07:11.973 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:11.973 EAL: Detected shared linkage of DPDK 00:07:11.973 EAL: No shared files mode enabled, IPC will be disabled 00:07:11.973 EAL: Bus pci wants IOVA as 'DC' 00:07:11.973 EAL: Buses did not request a specific IOVA mode. 00:07:11.973 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:11.973 EAL: Selected IOVA mode 'VA' 00:07:11.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.973 EAL: Probing VFIO support... 00:07:11.973 EAL: IOMMU type 1 (Type 1) is supported 00:07:11.973 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:11.973 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:11.973 EAL: VFIO support initialized 00:07:11.973 EAL: Ask a virtual area of 0x2e000 bytes 00:07:11.973 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:11.973 EAL: Setting up physically contiguous memory... 00:07:11.973 EAL: Setting maximum number of open files to 524288 00:07:11.973 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:11.973 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:11.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:11.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 1, 
page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:11.973 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.973 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:11.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:11.973 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.973 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:11.973 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:11.973 EAL: Hugepages will be freed exactly as allocated. 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: TSC frequency is ~2200000 KHz 00:07:11.973 EAL: Main lcore 0 is ready (tid=7fdc5ebbba00;cpuset=[0]) 00:07:11.973 EAL: Trying to obtain current memory policy. 00:07:11.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.973 EAL: Restoring previous memory policy: 0 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was expanded by 2MB 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:11.973 EAL: Mem event callback 'spdk:(nil)' registered 00:07:11.973 00:07:11.973 00:07:11.973 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.973 http://cunit.sourceforge.net/ 00:07:11.973 00:07:11.973 00:07:11.973 Suite: components_suite 00:07:11.973 Test: vtophys_malloc_test ...passed 00:07:11.973 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:11.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.973 EAL: Restoring previous memory policy: 4 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was expanded by 4MB 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was shrunk by 4MB 00:07:11.973 EAL: Trying to obtain current memory policy. 
00:07:11.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.973 EAL: Restoring previous memory policy: 4 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was expanded by 6MB 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was shrunk by 6MB 00:07:11.973 EAL: Trying to obtain current memory policy. 00:07:11.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.973 EAL: Restoring previous memory policy: 4 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.973 EAL: request: mp_malloc_sync 00:07:11.973 EAL: No shared files mode enabled, IPC is disabled 00:07:11.973 EAL: Heap on socket 0 was expanded by 10MB 00:07:11.973 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was shrunk by 10MB 00:07:11.974 EAL: Trying to obtain current memory policy. 00:07:11.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.974 EAL: Restoring previous memory policy: 4 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was expanded by 18MB 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was shrunk by 18MB 00:07:11.974 EAL: Trying to obtain current memory policy. 00:07:11.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.974 EAL: Restoring previous memory policy: 4 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was expanded by 34MB 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was shrunk by 34MB 00:07:11.974 EAL: Trying to obtain current memory policy. 00:07:11.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.974 EAL: Restoring previous memory policy: 4 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was expanded by 66MB 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was shrunk by 66MB 00:07:11.974 EAL: Trying to obtain current memory policy. 
00:07:11.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.974 EAL: Restoring previous memory policy: 4 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.974 EAL: request: mp_malloc_sync 00:07:11.974 EAL: No shared files mode enabled, IPC is disabled 00:07:11.974 EAL: Heap on socket 0 was expanded by 130MB 00:07:11.974 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.233 EAL: request: mp_malloc_sync 00:07:12.233 EAL: No shared files mode enabled, IPC is disabled 00:07:12.233 EAL: Heap on socket 0 was shrunk by 130MB 00:07:12.233 EAL: Trying to obtain current memory policy. 00:07:12.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.233 EAL: Restoring previous memory policy: 4 00:07:12.233 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.233 EAL: request: mp_malloc_sync 00:07:12.233 EAL: No shared files mode enabled, IPC is disabled 00:07:12.233 EAL: Heap on socket 0 was expanded by 258MB 00:07:12.233 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.233 EAL: request: mp_malloc_sync 00:07:12.233 EAL: No shared files mode enabled, IPC is disabled 00:07:12.233 EAL: Heap on socket 0 was shrunk by 258MB 00:07:12.233 EAL: Trying to obtain current memory policy. 00:07:12.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.233 EAL: Restoring previous memory policy: 4 00:07:12.233 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.233 EAL: request: mp_malloc_sync 00:07:12.233 EAL: No shared files mode enabled, IPC is disabled 00:07:12.233 EAL: Heap on socket 0 was expanded by 514MB 00:07:12.492 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.492 EAL: request: mp_malloc_sync 00:07:12.492 EAL: No shared files mode enabled, IPC is disabled 00:07:12.492 EAL: Heap on socket 0 was shrunk by 514MB 00:07:12.492 EAL: Trying to obtain current memory policy. 
00:07:12.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.751 EAL: Restoring previous memory policy: 4 00:07:12.751 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.751 EAL: request: mp_malloc_sync 00:07:12.751 EAL: No shared files mode enabled, IPC is disabled 00:07:12.751 EAL: Heap on socket 0 was expanded by 1026MB 00:07:13.010 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.010 EAL: request: mp_malloc_sync 00:07:13.010 EAL: No shared files mode enabled, IPC is disabled 00:07:13.010 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:13.010 passed 00:07:13.010 00:07:13.010 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.010 suites 1 1 n/a 0 0 00:07:13.010 tests 2 2 2 0 0 00:07:13.010 asserts 497 497 497 0 n/a 00:07:13.010 00:07:13.010 Elapsed time = 1.020 seconds 00:07:13.010 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.010 EAL: request: mp_malloc_sync 00:07:13.010 EAL: No shared files mode enabled, IPC is disabled 00:07:13.010 EAL: Heap on socket 0 was shrunk by 2MB 00:07:13.010 EAL: No shared files mode enabled, IPC is disabled 00:07:13.010 EAL: No shared files mode enabled, IPC is disabled 00:07:13.010 EAL: No shared files mode enabled, IPC is disabled 00:07:13.010 00:07:13.010 real 0m1.156s 00:07:13.010 user 0m0.661s 00:07:13.010 sys 0m0.466s 00:07:13.010 10:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.010 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.010 ************************************ 00:07:13.010 END TEST env_vtophys 00:07:13.010 ************************************ 00:07:13.010 10:03:46 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:13.010 10:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.010 10:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.010 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.010 ************************************ 00:07:13.010 START TEST env_pci 00:07:13.010 ************************************ 00:07:13.010 10:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:13.010 00:07:13.010 00:07:13.010 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.010 http://cunit.sourceforge.net/ 00:07:13.010 00:07:13.010 00:07:13.010 Suite: pci 00:07:13.010 Test: pci_hook ...[2024-04-17 10:03:46.326534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3265963 has claimed it 00:07:13.269 EAL: Cannot find device (10000:00:01.0) 00:07:13.269 EAL: Failed to attach device on primary process 00:07:13.269 passed 00:07:13.269 00:07:13.269 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.269 suites 1 1 n/a 0 0 00:07:13.269 tests 1 1 1 0 0 00:07:13.269 asserts 25 25 25 0 n/a 00:07:13.269 00:07:13.269 Elapsed time = 0.030 seconds 00:07:13.269 00:07:13.269 real 0m0.050s 00:07:13.269 user 0m0.019s 00:07:13.269 sys 0m0.030s 00:07:13.269 10:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.269 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.269 ************************************ 00:07:13.269 END TEST env_pci 00:07:13.269 ************************************ 00:07:13.269 10:03:46 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:13.269 10:03:46 -- env/env.sh@15 -- # uname 00:07:13.269 10:03:46 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:07:13.269 10:03:46 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:13.269 10:03:46 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:13.269 10:03:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:13.269 10:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.269 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:07:13.270 ************************************ 00:07:13.270 START TEST env_dpdk_post_init 00:07:13.270 ************************************ 00:07:13.270 10:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:13.270 EAL: Detected CPU lcores: 112 00:07:13.270 EAL: Detected NUMA nodes: 2 00:07:13.270 EAL: Detected shared linkage of DPDK 00:07:13.270 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:13.270 EAL: Selected IOVA mode 'VA' 00:07:13.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.270 EAL: VFIO support initialized 00:07:13.270 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:13.270 EAL: Using IOMMU type 1 (Type 1) 00:07:13.270 EAL: Ignore mapping IO port bar(1) 00:07:13.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:07:13.270 EAL: Ignore mapping IO port bar(1) 00:07:13.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:07:13.270 EAL: Ignore mapping IO port bar(1) 00:07:13.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:07:13.270 EAL: Ignore mapping IO port bar(1) 00:07:13.270 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:07:13.529 EAL: Ignore mapping IO port bar(1) 00:07:13.529 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:07:14.465 EAL: Probe 
PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:07:17.763 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:07:17.763 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:07:17.763 Starting DPDK initialization... 00:07:17.763 Starting SPDK post initialization... 00:07:17.763 SPDK NVMe probe 00:07:17.763 Attaching to 0000:86:00.0 00:07:17.763 Attached to 0000:86:00.0 00:07:17.763 Cleaning up... 00:07:17.763 00:07:17.763 real 0m4.462s 00:07:17.763 user 0m3.357s 00:07:17.763 sys 0m0.161s 00:07:17.763 10:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.763 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.763 ************************************ 00:07:17.763 END TEST env_dpdk_post_init 00:07:17.763 ************************************ 00:07:17.763 10:03:50 -- env/env.sh@26 -- # uname 00:07:17.763 10:03:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:17.763 10:03:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:17.763 10:03:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.763 10:03:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.763 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.763 ************************************ 00:07:17.763 START TEST env_mem_callbacks 00:07:17.763 ************************************ 00:07:17.763 10:03:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:17.763 EAL: Detected CPU lcores: 112 00:07:17.763 EAL: Detected NUMA nodes: 2 00:07:17.763 EAL: Detected shared linkage of DPDK 00:07:17.763 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:17.763 EAL: Selected IOVA mode 'VA' 00:07:17.764 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.764 EAL: VFIO support initialized 00:07:17.764 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:17.764 00:07:17.764 00:07:17.764 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.764 http://cunit.sourceforge.net/ 00:07:17.764 00:07:17.764 00:07:17.764 Suite: memory 00:07:17.764 Test: test ... 
00:07:17.764 register 0x200000200000 2097152 00:07:17.764 malloc 3145728 00:07:17.764 register 0x200000400000 4194304 00:07:17.764 buf 0x200000500000 len 3145728 PASSED 00:07:17.764 malloc 64 00:07:17.764 buf 0x2000004fff40 len 64 PASSED 00:07:17.764 malloc 4194304 00:07:17.764 register 0x200000800000 6291456 00:07:17.764 buf 0x200000a00000 len 4194304 PASSED 00:07:17.764 free 0x200000500000 3145728 00:07:17.764 free 0x2000004fff40 64 00:07:17.764 unregister 0x200000400000 4194304 PASSED 00:07:17.764 free 0x200000a00000 4194304 00:07:17.764 unregister 0x200000800000 6291456 PASSED 00:07:17.764 malloc 8388608 00:07:17.764 register 0x200000400000 10485760 00:07:17.764 buf 0x200000600000 len 8388608 PASSED 00:07:17.764 free 0x200000600000 8388608 00:07:17.764 unregister 0x200000400000 10485760 PASSED 00:07:17.764 passed 00:07:17.764 00:07:17.764 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.764 suites 1 1 n/a 0 0 00:07:17.764 tests 1 1 1 0 0 00:07:17.764 asserts 15 15 15 0 n/a 00:07:17.764 00:07:17.764 Elapsed time = 0.007 seconds 00:07:17.764 00:07:17.764 real 0m0.063s 00:07:17.764 user 0m0.021s 00:07:17.764 sys 0m0.042s 00:07:17.764 10:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.764 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.764 ************************************ 00:07:17.764 END TEST env_mem_callbacks 00:07:17.764 ************************************ 00:07:17.764 00:07:17.764 real 0m6.245s 00:07:17.764 user 0m4.370s 00:07:17.764 sys 0m0.940s 00:07:17.764 10:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.764 10:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.764 ************************************ 00:07:17.764 END TEST env 00:07:17.764 ************************************ 00:07:17.764 10:03:51 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:17.764 10:03:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.764 10:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.764 10:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.764 ************************************ 00:07:17.764 START TEST rpc 00:07:17.764 ************************************ 00:07:17.764 10:03:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:18.023 * Looking for test storage... 00:07:18.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:18.023 10:03:51 -- rpc/rpc.sh@65 -- # spdk_pid=3266887 00:07:18.023 10:03:51 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.023 10:03:51 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:18.023 10:03:51 -- rpc/rpc.sh@67 -- # waitforlisten 3266887 00:07:18.023 10:03:51 -- common/autotest_common.sh@819 -- # '[' -z 3266887 ']' 00:07:18.023 10:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.023 10:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.023 10:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
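[editor note] The rpc suite that begins here starts its own spdk_tgt (with the bdev tracepoint group enabled via -e bdev) and blocks in waitforlisten until the RPC socket answers before issuing any commands. A minimal hand-rolled sketch of that start-and-wait pattern, assuming a built SPDK tree and the default /var/tmp/spdk.sock path shown above; rpc_get_methods is used here only as a cheap liveness probe, standing in for the harness's waitforlisten helper:

    # start the target with the bdev tracepoint group enabled, as in the log
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!

    # poll until the UNIX-domain RPC socket exists and responds
    until [ -S /var/tmp/spdk.sock ] && \
          ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done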
00:07:18.023 10:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.023 10:03:51 -- common/autotest_common.sh@10 -- # set +x 00:07:18.023 [2024-04-17 10:03:51.184604] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:18.023 [2024-04-17 10:03:51.184678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266887 ] 00:07:18.023 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.023 [2024-04-17 10:03:51.267336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.023 [2024-04-17 10:03:51.354064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.023 [2024-04-17 10:03:51.354204] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:18.023 [2024-04-17 10:03:51.354216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3266887' to capture a snapshot of events at runtime. 00:07:18.023 [2024-04-17 10:03:51.354227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3266887 for offline analysis/debug. 00:07:18.023 [2024-04-17 10:03:51.354249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.961 10:03:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.961 10:03:52 -- common/autotest_common.sh@852 -- # return 0 00:07:18.961 10:03:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:18.961 10:03:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:18.961 10:03:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:18.961 10:03:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:18.961 10:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.961 10:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 ************************************ 00:07:18.961 START TEST rpc_integrity 00:07:18.961 ************************************ 00:07:18.961 10:03:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:18.961 10:03:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:18.961 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.961 10:03:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:18.961 10:03:52 -- rpc/rpc.sh@13 -- # jq length 00:07:18.961 10:03:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:18.961 10:03:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:18.961 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 10:03:52 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:07:18.961 10:03:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:18.961 10:03:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:18.961 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.961 10:03:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:18.961 { 00:07:18.961 "name": "Malloc0", 00:07:18.961 "aliases": [ 00:07:18.961 "f9ab2ae5-1c09-4e5c-98d5-0da3689768f0" 00:07:18.961 ], 00:07:18.961 "product_name": "Malloc disk", 00:07:18.961 "block_size": 512, 00:07:18.961 "num_blocks": 16384, 00:07:18.961 "uuid": "f9ab2ae5-1c09-4e5c-98d5-0da3689768f0", 00:07:18.961 "assigned_rate_limits": { 00:07:18.961 "rw_ios_per_sec": 0, 00:07:18.961 "rw_mbytes_per_sec": 0, 00:07:18.961 "r_mbytes_per_sec": 0, 00:07:18.961 "w_mbytes_per_sec": 0 00:07:18.961 }, 00:07:18.961 "claimed": false, 00:07:18.961 "zoned": false, 00:07:18.961 "supported_io_types": { 00:07:18.961 "read": true, 00:07:18.961 "write": true, 00:07:18.961 "unmap": true, 00:07:18.961 "write_zeroes": true, 00:07:18.961 "flush": true, 00:07:18.961 "reset": true, 00:07:18.961 "compare": false, 00:07:18.961 "compare_and_write": false, 00:07:18.961 "abort": true, 00:07:18.961 "nvme_admin": false, 00:07:18.961 "nvme_io": false 00:07:18.961 }, 00:07:18.961 "memory_domains": [ 00:07:18.961 { 00:07:18.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.961 "dma_device_type": 2 00:07:18.961 } 00:07:18.961 ], 00:07:18.961 "driver_specific": {} 00:07:18.961 } 00:07:18.961 ]' 00:07:18.961 10:03:52 -- rpc/rpc.sh@17 -- # jq length 00:07:18.961 10:03:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:18.961 10:03:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:18.961 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 [2024-04-17 10:03:52.249694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:18.961 [2024-04-17 10:03:52.249733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.961 [2024-04-17 10:03:52.249751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb77ac0 00:07:18.961 [2024-04-17 10:03:52.249760] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.961 [2024-04-17 10:03:52.251283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.961 [2024-04-17 10:03:52.251308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:18.961 Passthru0 00:07:18.961 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.961 10:03:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:18.961 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.961 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:18.961 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.961 10:03:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:18.961 { 00:07:18.961 "name": "Malloc0", 00:07:18.961 "aliases": [ 00:07:18.961 "f9ab2ae5-1c09-4e5c-98d5-0da3689768f0" 00:07:18.961 ], 00:07:18.961 "product_name": "Malloc disk", 00:07:18.961 "block_size": 512, 00:07:18.961 "num_blocks": 16384, 00:07:18.961 "uuid": "f9ab2ae5-1c09-4e5c-98d5-0da3689768f0", 00:07:18.961 "assigned_rate_limits": { 00:07:18.961 "rw_ios_per_sec": 0, 00:07:18.961 "rw_mbytes_per_sec": 0, 00:07:18.961 
"r_mbytes_per_sec": 0, 00:07:18.961 "w_mbytes_per_sec": 0 00:07:18.961 }, 00:07:18.961 "claimed": true, 00:07:18.961 "claim_type": "exclusive_write", 00:07:18.961 "zoned": false, 00:07:18.961 "supported_io_types": { 00:07:18.961 "read": true, 00:07:18.961 "write": true, 00:07:18.961 "unmap": true, 00:07:18.961 "write_zeroes": true, 00:07:18.961 "flush": true, 00:07:18.961 "reset": true, 00:07:18.961 "compare": false, 00:07:18.961 "compare_and_write": false, 00:07:18.962 "abort": true, 00:07:18.962 "nvme_admin": false, 00:07:18.962 "nvme_io": false 00:07:18.962 }, 00:07:18.962 "memory_domains": [ 00:07:18.962 { 00:07:18.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.962 "dma_device_type": 2 00:07:18.962 } 00:07:18.962 ], 00:07:18.962 "driver_specific": {} 00:07:18.962 }, 00:07:18.962 { 00:07:18.962 "name": "Passthru0", 00:07:18.962 "aliases": [ 00:07:18.962 "80eb3fb5-9bef-5302-a193-0552d03940b3" 00:07:18.962 ], 00:07:18.962 "product_name": "passthru", 00:07:18.962 "block_size": 512, 00:07:18.962 "num_blocks": 16384, 00:07:18.962 "uuid": "80eb3fb5-9bef-5302-a193-0552d03940b3", 00:07:18.962 "assigned_rate_limits": { 00:07:18.962 "rw_ios_per_sec": 0, 00:07:18.962 "rw_mbytes_per_sec": 0, 00:07:18.962 "r_mbytes_per_sec": 0, 00:07:18.962 "w_mbytes_per_sec": 0 00:07:18.962 }, 00:07:18.962 "claimed": false, 00:07:18.962 "zoned": false, 00:07:18.962 "supported_io_types": { 00:07:18.962 "read": true, 00:07:18.962 "write": true, 00:07:18.962 "unmap": true, 00:07:18.962 "write_zeroes": true, 00:07:18.962 "flush": true, 00:07:18.962 "reset": true, 00:07:18.962 "compare": false, 00:07:18.962 "compare_and_write": false, 00:07:18.962 "abort": true, 00:07:18.962 "nvme_admin": false, 00:07:18.962 "nvme_io": false 00:07:18.962 }, 00:07:18.962 "memory_domains": [ 00:07:18.962 { 00:07:18.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.962 "dma_device_type": 2 00:07:18.962 } 00:07:18.962 ], 00:07:18.962 "driver_specific": { 00:07:18.962 "passthru": { 00:07:18.962 "name": "Passthru0", 00:07:18.962 "base_bdev_name": "Malloc0" 00:07:18.962 } 00:07:18.962 } 00:07:18.962 } 00:07:18.962 ]' 00:07:18.962 10:03:52 -- rpc/rpc.sh@21 -- # jq length 00:07:19.221 10:03:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:19.221 10:03:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:19.221 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.221 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.221 10:03:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:19.221 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.221 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.221 10:03:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:19.221 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.221 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.221 10:03:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:19.221 10:03:52 -- rpc/rpc.sh@26 -- # jq length 00:07:19.221 10:03:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:19.221 00:07:19.221 real 0m0.283s 00:07:19.221 user 0m0.190s 00:07:19.221 sys 0m0.029s 00:07:19.221 10:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.221 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.221 ************************************ 
00:07:19.221 END TEST rpc_integrity 00:07:19.222 ************************************ 00:07:19.222 10:03:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:19.222 10:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.222 10:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.222 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.222 ************************************ 00:07:19.222 START TEST rpc_plugins 00:07:19.222 ************************************ 00:07:19.222 10:03:52 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:07:19.222 10:03:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:19.222 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.222 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.222 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.222 10:03:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:19.222 10:03:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:19.222 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.222 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.222 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.222 10:03:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:19.222 { 00:07:19.222 "name": "Malloc1", 00:07:19.222 "aliases": [ 00:07:19.222 "cf5240c4-24f8-441d-9863-eb9da9810492" 00:07:19.222 ], 00:07:19.222 "product_name": "Malloc disk", 00:07:19.222 "block_size": 4096, 00:07:19.222 "num_blocks": 256, 00:07:19.222 "uuid": "cf5240c4-24f8-441d-9863-eb9da9810492", 00:07:19.222 "assigned_rate_limits": { 00:07:19.222 "rw_ios_per_sec": 0, 00:07:19.222 "rw_mbytes_per_sec": 0, 00:07:19.222 "r_mbytes_per_sec": 0, 00:07:19.222 "w_mbytes_per_sec": 0 00:07:19.222 }, 00:07:19.222 "claimed": false, 00:07:19.222 "zoned": false, 00:07:19.222 "supported_io_types": { 00:07:19.222 "read": true, 00:07:19.222 "write": true, 00:07:19.222 "unmap": true, 00:07:19.222 "write_zeroes": true, 00:07:19.222 "flush": true, 00:07:19.222 "reset": true, 00:07:19.222 "compare": false, 00:07:19.222 "compare_and_write": false, 00:07:19.222 "abort": true, 00:07:19.222 "nvme_admin": false, 00:07:19.222 "nvme_io": false 00:07:19.222 }, 00:07:19.222 "memory_domains": [ 00:07:19.222 { 00:07:19.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.222 "dma_device_type": 2 00:07:19.222 } 00:07:19.222 ], 00:07:19.222 "driver_specific": {} 00:07:19.222 } 00:07:19.222 ]' 00:07:19.222 10:03:52 -- rpc/rpc.sh@32 -- # jq length 00:07:19.222 10:03:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:19.222 10:03:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:19.222 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.222 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.222 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.222 10:03:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:19.222 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.222 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.222 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.222 10:03:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:19.222 10:03:52 -- rpc/rpc.sh@36 -- # jq length 00:07:19.481 10:03:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:19.481 00:07:19.481 real 0m0.148s 00:07:19.481 user 0m0.093s 00:07:19.481 sys 0m0.019s 00:07:19.481 10:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.481 10:03:52 -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.481 ************************************ 00:07:19.481 END TEST rpc_plugins 00:07:19.481 ************************************ 00:07:19.481 10:03:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:19.481 10:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.481 10:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.481 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.481 ************************************ 00:07:19.481 START TEST rpc_trace_cmd_test 00:07:19.481 ************************************ 00:07:19.481 10:03:52 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:07:19.481 10:03:52 -- rpc/rpc.sh@40 -- # local info 00:07:19.481 10:03:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:19.481 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.481 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.481 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.481 10:03:52 -- rpc/rpc.sh@42 -- # info='{ 00:07:19.481 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3266887", 00:07:19.481 "tpoint_group_mask": "0x8", 00:07:19.481 "iscsi_conn": { 00:07:19.481 "mask": "0x2", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "scsi": { 00:07:19.481 "mask": "0x4", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "bdev": { 00:07:19.481 "mask": "0x8", 00:07:19.481 "tpoint_mask": "0xffffffffffffffff" 00:07:19.481 }, 00:07:19.481 "nvmf_rdma": { 00:07:19.481 "mask": "0x10", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "nvmf_tcp": { 00:07:19.481 "mask": "0x20", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "ftl": { 00:07:19.481 "mask": "0x40", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "blobfs": { 00:07:19.481 "mask": "0x80", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "dsa": { 00:07:19.481 "mask": "0x200", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "thread": { 00:07:19.481 "mask": "0x400", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "nvme_pcie": { 00:07:19.481 "mask": "0x800", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "iaa": { 00:07:19.481 "mask": "0x1000", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "nvme_tcp": { 00:07:19.481 "mask": "0x2000", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 }, 00:07:19.481 "bdev_nvme": { 00:07:19.481 "mask": "0x4000", 00:07:19.481 "tpoint_mask": "0x0" 00:07:19.481 } 00:07:19.481 }' 00:07:19.481 10:03:52 -- rpc/rpc.sh@43 -- # jq length 00:07:19.481 10:03:52 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:19.481 10:03:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:19.481 10:03:52 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:19.481 10:03:52 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:19.481 10:03:52 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:19.481 10:03:52 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:19.740 10:03:52 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:19.740 10:03:52 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:19.740 10:03:52 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:19.740 00:07:19.740 real 0m0.248s 00:07:19.740 user 0m0.217s 00:07:19.740 sys 0m0.022s 00:07:19.740 10:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.740 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 ************************************ 
00:07:19.740 END TEST rpc_trace_cmd_test 00:07:19.740 ************************************ 00:07:19.740 10:03:52 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:19.740 10:03:52 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:19.740 10:03:52 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:19.740 10:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.740 10:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.740 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 ************************************ 00:07:19.740 START TEST rpc_daemon_integrity 00:07:19.740 ************************************ 00:07:19.740 10:03:52 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:19.740 10:03:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:19.740 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.740 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.740 10:03:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:19.740 10:03:52 -- rpc/rpc.sh@13 -- # jq length 00:07:19.740 10:03:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:19.740 10:03:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:19.740 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.740 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 10:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.740 10:03:52 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:19.740 10:03:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:19.740 10:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.740 10:03:52 -- common/autotest_common.sh@10 -- # set +x 00:07:19.740 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.740 10:03:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:19.740 { 00:07:19.740 "name": "Malloc2", 00:07:19.740 "aliases": [ 00:07:19.740 "9756a4a3-4d37-4fff-9bc5-6f6ef13128d0" 00:07:19.740 ], 00:07:19.740 "product_name": "Malloc disk", 00:07:19.740 "block_size": 512, 00:07:19.740 "num_blocks": 16384, 00:07:19.740 "uuid": "9756a4a3-4d37-4fff-9bc5-6f6ef13128d0", 00:07:19.740 "assigned_rate_limits": { 00:07:19.740 "rw_ios_per_sec": 0, 00:07:19.740 "rw_mbytes_per_sec": 0, 00:07:19.740 "r_mbytes_per_sec": 0, 00:07:19.740 "w_mbytes_per_sec": 0 00:07:19.740 }, 00:07:19.740 "claimed": false, 00:07:19.740 "zoned": false, 00:07:19.740 "supported_io_types": { 00:07:19.740 "read": true, 00:07:19.740 "write": true, 00:07:19.741 "unmap": true, 00:07:19.741 "write_zeroes": true, 00:07:19.741 "flush": true, 00:07:19.741 "reset": true, 00:07:19.741 "compare": false, 00:07:19.741 "compare_and_write": false, 00:07:19.741 "abort": true, 00:07:19.741 "nvme_admin": false, 00:07:19.741 "nvme_io": false 00:07:19.741 }, 00:07:19.741 "memory_domains": [ 00:07:19.741 { 00:07:19.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.741 "dma_device_type": 2 00:07:19.741 } 00:07:19.741 ], 00:07:19.741 "driver_specific": {} 00:07:19.741 } 00:07:19.741 ]' 00:07:19.741 10:03:53 -- rpc/rpc.sh@17 -- # jq length 00:07:19.741 10:03:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:19.741 10:03:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:19.741 10:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.741 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 [2024-04-17 10:03:53.051989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:19.741 [2024-04-17 
10:03:53.052026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.741 [2024-04-17 10:03:53.052042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0d730 00:07:19.741 [2024-04-17 10:03:53.052051] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.741 [2024-04-17 10:03:53.053416] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.741 [2024-04-17 10:03:53.053442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:19.741 Passthru0 00:07:19.741 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.741 10:03:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:19.741 10:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.741 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.000 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.000 10:03:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.000 { 00:07:20.000 "name": "Malloc2", 00:07:20.000 "aliases": [ 00:07:20.000 "9756a4a3-4d37-4fff-9bc5-6f6ef13128d0" 00:07:20.000 ], 00:07:20.000 "product_name": "Malloc disk", 00:07:20.000 "block_size": 512, 00:07:20.000 "num_blocks": 16384, 00:07:20.000 "uuid": "9756a4a3-4d37-4fff-9bc5-6f6ef13128d0", 00:07:20.000 "assigned_rate_limits": { 00:07:20.000 "rw_ios_per_sec": 0, 00:07:20.000 "rw_mbytes_per_sec": 0, 00:07:20.000 "r_mbytes_per_sec": 0, 00:07:20.000 "w_mbytes_per_sec": 0 00:07:20.000 }, 00:07:20.000 "claimed": true, 00:07:20.000 "claim_type": "exclusive_write", 00:07:20.000 "zoned": false, 00:07:20.000 "supported_io_types": { 00:07:20.000 "read": true, 00:07:20.000 "write": true, 00:07:20.000 "unmap": true, 00:07:20.000 "write_zeroes": true, 00:07:20.000 "flush": true, 00:07:20.000 "reset": true, 00:07:20.000 "compare": false, 00:07:20.000 "compare_and_write": false, 00:07:20.000 "abort": true, 00:07:20.000 "nvme_admin": false, 00:07:20.000 "nvme_io": false 00:07:20.000 }, 00:07:20.000 "memory_domains": [ 00:07:20.000 { 00:07:20.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.000 "dma_device_type": 2 00:07:20.000 } 00:07:20.000 ], 00:07:20.000 "driver_specific": {} 00:07:20.000 }, 00:07:20.000 { 00:07:20.000 "name": "Passthru0", 00:07:20.000 "aliases": [ 00:07:20.000 "bde5b0b7-9c8e-5619-a50b-f16b2a2eea18" 00:07:20.000 ], 00:07:20.000 "product_name": "passthru", 00:07:20.000 "block_size": 512, 00:07:20.000 "num_blocks": 16384, 00:07:20.000 "uuid": "bde5b0b7-9c8e-5619-a50b-f16b2a2eea18", 00:07:20.000 "assigned_rate_limits": { 00:07:20.000 "rw_ios_per_sec": 0, 00:07:20.000 "rw_mbytes_per_sec": 0, 00:07:20.000 "r_mbytes_per_sec": 0, 00:07:20.000 "w_mbytes_per_sec": 0 00:07:20.000 }, 00:07:20.000 "claimed": false, 00:07:20.000 "zoned": false, 00:07:20.000 "supported_io_types": { 00:07:20.000 "read": true, 00:07:20.000 "write": true, 00:07:20.000 "unmap": true, 00:07:20.000 "write_zeroes": true, 00:07:20.000 "flush": true, 00:07:20.000 "reset": true, 00:07:20.000 "compare": false, 00:07:20.000 "compare_and_write": false, 00:07:20.000 "abort": true, 00:07:20.000 "nvme_admin": false, 00:07:20.000 "nvme_io": false 00:07:20.000 }, 00:07:20.000 "memory_domains": [ 00:07:20.000 { 00:07:20.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.000 "dma_device_type": 2 00:07:20.000 } 00:07:20.000 ], 00:07:20.000 "driver_specific": { 00:07:20.000 "passthru": { 00:07:20.000 "name": "Passthru0", 00:07:20.000 "base_bdev_name": "Malloc2" 00:07:20.000 } 00:07:20.000 } 00:07:20.000 } 
00:07:20.000 ]' 00:07:20.000 10:03:53 -- rpc/rpc.sh@21 -- # jq length 00:07:20.000 10:03:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.000 10:03:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.000 10:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.000 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.000 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.000 10:03:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:20.000 10:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.000 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.000 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.000 10:03:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:20.000 10:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.000 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.000 10:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.000 10:03:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:20.000 10:03:53 -- rpc/rpc.sh@26 -- # jq length 00:07:20.000 10:03:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:20.000 00:07:20.000 real 0m0.294s 00:07:20.000 user 0m0.187s 00:07:20.001 sys 0m0.039s 00:07:20.001 10:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.001 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.001 ************************************ 00:07:20.001 END TEST rpc_daemon_integrity 00:07:20.001 ************************************ 00:07:20.001 10:03:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:20.001 10:03:53 -- rpc/rpc.sh@84 -- # killprocess 3266887 00:07:20.001 10:03:53 -- common/autotest_common.sh@926 -- # '[' -z 3266887 ']' 00:07:20.001 10:03:53 -- common/autotest_common.sh@930 -- # kill -0 3266887 00:07:20.001 10:03:53 -- common/autotest_common.sh@931 -- # uname 00:07:20.001 10:03:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.001 10:03:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3266887 00:07:20.001 10:03:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.001 10:03:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.001 10:03:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3266887' 00:07:20.001 killing process with pid 3266887 00:07:20.001 10:03:53 -- common/autotest_common.sh@945 -- # kill 3266887 00:07:20.001 10:03:53 -- common/autotest_common.sh@950 -- # wait 3266887 00:07:20.570 00:07:20.570 real 0m2.594s 00:07:20.570 user 0m3.402s 00:07:20.570 sys 0m0.664s 00:07:20.570 10:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.570 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.570 ************************************ 00:07:20.570 END TEST rpc 00:07:20.570 ************************************ 00:07:20.570 10:03:53 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:20.570 10:03:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.570 10:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.570 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.570 ************************************ 00:07:20.570 START TEST rpc_client 00:07:20.570 ************************************ 00:07:20.570 10:03:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
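[editor note] The rpc_integrity and rpc_daemon_integrity cases above drive the target entirely through scripts/rpc.py plus jq: create a malloc bdev, layer a passthru bdev on it, confirm both show up in bdev_get_bdevs, then tear them down in reverse order. Condensed by hand against the same /var/tmp/spdk.sock socket, the round trip is roughly:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc bdev_get_bdevs | jq length            # 0 on a clean target
    malloc=$($rpc bdev_malloc_create 8 512)    # 8 MB, 512-byte blocks; prints the bdev name
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length            # 2: the malloc bdev and Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length            # back to 0

The passthru bdev claims its base bdev, which is why the Malloc entry in the JSON above reports claimed: true with claim_type exclusive_write while Passthru0 is stacked on it.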
00:07:20.570 * Looking for test storage... 00:07:20.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:20.570 10:03:53 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:20.570 OK 00:07:20.570 10:03:53 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:20.570 00:07:20.570 real 0m0.105s 00:07:20.570 user 0m0.047s 00:07:20.570 sys 0m0.066s 00:07:20.570 10:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.570 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.570 ************************************ 00:07:20.570 END TEST rpc_client 00:07:20.570 ************************************ 00:07:20.570 10:03:53 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:20.570 10:03:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.570 10:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.570 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.570 ************************************ 00:07:20.570 START TEST json_config 00:07:20.570 ************************************ 00:07:20.570 10:03:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:20.570 10:03:53 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.570 10:03:53 -- nvmf/common.sh@7 -- # uname -s 00:07:20.570 10:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.570 10:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.570 10:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.570 10:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.570 10:03:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.570 10:03:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.570 10:03:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.570 10:03:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.570 10:03:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.570 10:03:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.570 10:03:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:20.570 10:03:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:20.570 10:03:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.570 10:03:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.570 10:03:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:20.570 10:03:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.570 10:03:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.570 10:03:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.570 10:03:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.570 10:03:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.570 10:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.570 10:03:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.570 10:03:53 -- paths/export.sh@5 -- # export PATH 00:07:20.570 10:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.830 10:03:53 -- nvmf/common.sh@46 -- # : 0 00:07:20.830 10:03:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:20.830 10:03:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:20.830 10:03:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:20.830 10:03:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.830 10:03:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.830 10:03:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:20.830 10:03:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:20.830 10:03:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:20.830 10:03:53 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:20.830 10:03:53 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:07:20.830 10:03:53 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:20.830 10:03:53 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:20.830 10:03:53 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:20.830 10:03:53 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:20.830 10:03:53 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:20.830 10:03:53 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:20.830 10:03:53 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:20.830 10:03:53 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:20.830 10:03:53 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:20.830 10:03:53 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:20.830 INFO: JSON configuration test init 00:07:20.830 10:03:53 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:20.830 10:03:53 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:20.830 10:03:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:20.830 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.830 10:03:53 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:20.830 10:03:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:20.830 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.830 10:03:53 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:20.830 10:03:53 -- json_config/json_config.sh@98 -- # local app=target 00:07:20.830 10:03:53 -- json_config/json_config.sh@99 -- # shift 00:07:20.830 10:03:53 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:20.830 10:03:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:20.830 10:03:53 -- json_config/json_config.sh@111 -- # app_pid[$app]=3267625 00:07:20.830 10:03:53 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:20.830 Waiting for target to run... 00:07:20.830 10:03:53 -- json_config/json_config.sh@114 -- # waitforlisten 3267625 /var/tmp/spdk_tgt.sock 00:07:20.830 10:03:53 -- common/autotest_common.sh@819 -- # '[' -z 3267625 ']' 00:07:20.830 10:03:53 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:20.830 10:03:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:20.830 10:03:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:20.830 10:03:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:20.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:20.830 10:03:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:20.830 10:03:53 -- common/autotest_common.sh@10 -- # set +x 00:07:20.830 [2024-04-17 10:03:53.974047] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
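[editor note] json_config_test_start_app launches this target with --wait-for-rpc on a dedicated socket, so subsystem initialization is held back until a configuration has been pushed over RPC; the harness then feeds it the output of gen_nvme.sh as shown just below. Stripped to the two commands involved (both taken from this run), the pattern is roughly:

    # start the target in deferred-init mode on its own RPC socket
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # build and load the bdev/nvmf configuration over that socket
    ./scripts/gen_nvme.sh --json-with-subsystems | \
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config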
00:07:20.830 [2024-04-17 10:03:53.974111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267625 ] 00:07:20.830 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.100 [2024-04-17 10:03:54.278693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.100 [2024-04-17 10:03:54.353994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.100 [2024-04-17 10:03:54.354125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.677 10:03:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:21.677 10:03:54 -- common/autotest_common.sh@852 -- # return 0 00:07:21.677 10:03:54 -- json_config/json_config.sh@115 -- # echo '' 00:07:21.677 00:07:21.677 10:03:54 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:21.677 10:03:54 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:21.677 10:03:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:21.677 10:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.677 10:03:54 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:21.677 10:03:54 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:21.677 10:03:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:21.677 10:03:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.677 10:03:54 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:21.677 10:03:54 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:21.677 10:03:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:24.967 10:03:58 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:24.967 10:03:58 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:24.967 10:03:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:24.967 10:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:24.967 10:03:58 -- json_config/json_config.sh@48 -- # local ret=0 00:07:24.967 10:03:58 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:24.967 10:03:58 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:24.967 10:03:58 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:24.967 10:03:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:24.967 10:03:58 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:25.226 10:03:58 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:25.226 10:03:58 -- json_config/json_config.sh@51 -- # local get_types 00:07:25.226 10:03:58 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:25.226 10:03:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:25.226 10:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.226 10:03:58 -- json_config/json_config.sh@58 -- # return 0 00:07:25.226 10:03:58 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:07:25.226 10:03:58 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:07:25.226 10:03:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.226 10:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:25.226 10:03:58 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:25.226 10:03:58 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:07:25.226 10:03:58 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:25.226 10:03:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:25.485 MallocForNvmf0 00:07:25.485 10:03:58 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:25.485 10:03:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:25.744 MallocForNvmf1 00:07:25.744 10:03:58 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:25.744 10:03:58 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:25.744 [2024-04-17 10:03:59.065558] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.003 10:03:59 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.003 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.004 10:03:59 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:26.004 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:26.263 10:03:59 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:26.263 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:26.522 10:03:59 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:26.522 10:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:26.781 [2024-04-17 10:04:00.016676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
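[editor note] create_nvmf_subsystem_config, which just ran, is a short chain of RPCs: two malloc bdevs as backing namespaces, a TCP transport, one subsystem, and a listener on 127.0.0.1:4420. Issued manually over the same socket, the sequence is:

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420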
00:07:26.781 10:04:00 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:07:26.781 10:04:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:26.781 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.781 10:04:00 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:26.781 10:04:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:26.781 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.781 10:04:00 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:26.781 10:04:00 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:26.781 10:04:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:27.040 MallocBdevForConfigChangeCheck 00:07:27.040 10:04:00 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:27.040 10:04:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:27.040 10:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:27.299 10:04:00 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:27.299 10:04:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:27.558 10:04:00 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:27.558 INFO: shutting down applications... 00:07:27.558 10:04:00 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:27.558 10:04:00 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:27.558 10:04:00 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:27.558 10:04:00 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:29.463 Calling clear_iscsi_subsystem 00:07:29.463 Calling clear_nvmf_subsystem 00:07:29.463 Calling clear_nbd_subsystem 00:07:29.463 Calling clear_ublk_subsystem 00:07:29.463 Calling clear_vhost_blk_subsystem 00:07:29.463 Calling clear_vhost_scsi_subsystem 00:07:29.463 Calling clear_scheduler_subsystem 00:07:29.463 Calling clear_bdev_subsystem 00:07:29.463 Calling clear_accel_subsystem 00:07:29.463 Calling clear_vmd_subsystem 00:07:29.463 Calling clear_sock_subsystem 00:07:29.463 Calling clear_iobuf_subsystem 00:07:29.463 10:04:02 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:29.463 10:04:02 -- json_config/json_config.sh@396 -- # count=100 00:07:29.463 10:04:02 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:29.463 10:04:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:29.463 10:04:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:29.463 10:04:02 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:29.463 10:04:02 -- json_config/json_config.sh@398 -- # break 00:07:29.463 10:04:02 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:29.463 10:04:02 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:07:29.463 10:04:02 -- json_config/json_config.sh@120 -- # local app=target 00:07:29.463 10:04:02 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:29.463 10:04:02 -- json_config/json_config.sh@124 -- # [[ -n 3267625 ]] 00:07:29.464 10:04:02 -- json_config/json_config.sh@127 -- # kill -SIGINT 3267625 00:07:29.464 10:04:02 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:29.464 10:04:02 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:29.464 10:04:02 -- json_config/json_config.sh@130 -- # kill -0 3267625 00:07:29.464 10:04:02 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:30.032 10:04:03 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:30.032 10:04:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:30.032 10:04:03 -- json_config/json_config.sh@130 -- # kill -0 3267625 00:07:30.032 10:04:03 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:30.032 10:04:03 -- json_config/json_config.sh@132 -- # break 00:07:30.032 10:04:03 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:30.032 10:04:03 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:30.032 SPDK target shutdown done 00:07:30.032 10:04:03 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:30.032 INFO: relaunching applications... 00:07:30.032 10:04:03 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.032 10:04:03 -- json_config/json_config.sh@98 -- # local app=target 00:07:30.032 10:04:03 -- json_config/json_config.sh@99 -- # shift 00:07:30.032 10:04:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:30.032 10:04:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:30.032 10:04:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:30.032 10:04:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:30.032 10:04:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:30.032 10:04:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=3269604 00:07:30.032 10:04:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:30.032 Waiting for target to run... 00:07:30.032 10:04:03 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:30.032 10:04:03 -- json_config/json_config.sh@114 -- # waitforlisten 3269604 /var/tmp/spdk_tgt.sock 00:07:30.032 10:04:03 -- common/autotest_common.sh@819 -- # '[' -z 3269604 ']' 00:07:30.032 10:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:30.032 10:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:30.032 10:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:30.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:30.032 10:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:30.032 10:04:03 -- common/autotest_common.sh@10 -- # set +x 00:07:30.032 [2024-04-17 10:04:03.307280] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:30.032 [2024-04-17 10:04:03.307353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269604 ] 00:07:30.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.657 [2024-04-17 10:04:03.759725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.657 [2024-04-17 10:04:03.859774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.657 [2024-04-17 10:04:03.859919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.950 [2024-04-17 10:04:06.897883] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.950 [2024-04-17 10:04:06.930278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:33.950 10:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.950 10:04:07 -- common/autotest_common.sh@852 -- # return 0 00:07:33.950 10:04:07 -- json_config/json_config.sh@115 -- # echo '' 00:07:33.950 00:07:33.950 10:04:07 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:33.950 10:04:07 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:33.950 INFO: Checking if target configuration is the same... 00:07:33.950 10:04:07 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.950 10:04:07 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:33.950 10:04:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.950 + '[' 2 -ne 2 ']' 00:07:33.950 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:33.950 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:33.950 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:33.950 +++ basename /dev/fd/62 00:07:33.950 ++ mktemp /tmp/62.XXX 00:07:33.950 + tmp_file_1=/tmp/62.xuO 00:07:33.950 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.950 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.950 + tmp_file_2=/tmp/spdk_tgt_config.json.dB9 00:07:33.950 + ret=0 00:07:33.950 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.208 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.467 + diff -u /tmp/62.xuO /tmp/spdk_tgt_config.json.dB9 00:07:34.467 + echo 'INFO: JSON config files are the same' 00:07:34.467 INFO: JSON config files are the same 00:07:34.467 + rm /tmp/62.xuO /tmp/spdk_tgt_config.json.dB9 00:07:34.467 + exit 0 00:07:34.467 10:04:07 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:34.467 10:04:07 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:34.467 INFO: changing configuration and checking if this can be detected... 
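[editor note] Both halves of this check use the same comparison idiom: dump the live configuration with save_config, normalize it and the on-disk spdk_tgt_config.json with config_filter.py -method sort, and diff the two. Reduced to its shape (temp-file names here are illustrative; the harness uses mktemp as shown above):

    filter=./test/json_config/config_filter.py
    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $rpc save_config | $filter -method sort              > /tmp/live.json
    $filter -method sort < spdk_tgt_config.json          > /tmp/saved.json

    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'

A nonzero diff is exactly what the next step provokes by deleting MallocBdevForConfigChangeCheck and re-running the comparison.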
00:07:34.467 10:04:07 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:34.467 10:04:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:34.726 10:04:07 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:34.726 10:04:07 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:34.726 10:04:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:34.726 + '[' 2 -ne 2 ']' 00:07:34.726 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:34.726 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:34.726 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.726 +++ basename /dev/fd/62 00:07:34.726 ++ mktemp /tmp/62.XXX 00:07:34.726 + tmp_file_1=/tmp/62.ivR 00:07:34.726 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:34.726 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:34.726 + tmp_file_2=/tmp/spdk_tgt_config.json.IqD 00:07:34.726 + ret=0 00:07:34.726 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.984 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.984 + diff -u /tmp/62.ivR /tmp/spdk_tgt_config.json.IqD 00:07:34.984 + ret=1 00:07:34.984 + echo '=== Start of file: /tmp/62.ivR ===' 00:07:34.984 + cat /tmp/62.ivR 00:07:34.984 + echo '=== End of file: /tmp/62.ivR ===' 00:07:34.984 + echo '' 00:07:34.984 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IqD ===' 00:07:34.984 + cat /tmp/spdk_tgt_config.json.IqD 00:07:34.984 + echo '=== End of file: /tmp/spdk_tgt_config.json.IqD ===' 00:07:34.984 + echo '' 00:07:34.984 + rm /tmp/62.ivR /tmp/spdk_tgt_config.json.IqD 00:07:34.984 + exit 1 00:07:34.984 10:04:08 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:07:34.984 INFO: configuration change detected. 
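Change detection works because an earlier step created a malloc bdev named MallocBdevForConfigChangeCheck purely as a marker: deleting it over RPC guarantees the live configuration no longer matches the saved file, so the same diff must now return non-zero. A sketch of that step, reusing $SPDK and $SOCK from the sketch above (this is an illustration, not the test script itself):

# remove the marker bdev, then expect the comparison to fail
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_delete MallocBdevForConfigChangeCheck
if "$SPDK"/scripts/rpc.py -s "$SOCK" save_config | "$SPDK"/test/json_config/config_filter.py -method sort \
   | diff -u - <("$SPDK"/test/json_config/config_filter.py -method sort < "$SPDK"/spdk_tgt_config.json) >/dev/null
then
    echo 'ERROR: change was not detected'        # would point at a broken save_config path
else
    echo 'INFO: configuration change detected'   # expected outcome, matching the log above
fi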
00:07:34.984 10:04:08 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:34.984 10:04:08 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:34.984 10:04:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.984 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:34.984 10:04:08 -- json_config/json_config.sh@360 -- # local ret=0 00:07:34.984 10:04:08 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:34.984 10:04:08 -- json_config/json_config.sh@370 -- # [[ -n 3269604 ]] 00:07:34.985 10:04:08 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:34.985 10:04:08 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:34.985 10:04:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.985 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:34.985 10:04:08 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:07:34.985 10:04:08 -- json_config/json_config.sh@246 -- # uname -s 00:07:34.985 10:04:08 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:34.985 10:04:08 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:34.985 10:04:08 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:34.985 10:04:08 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:34.985 10:04:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:34.985 10:04:08 -- common/autotest_common.sh@10 -- # set +x 00:07:34.985 10:04:08 -- json_config/json_config.sh@376 -- # killprocess 3269604 00:07:34.985 10:04:08 -- common/autotest_common.sh@926 -- # '[' -z 3269604 ']' 00:07:34.985 10:04:08 -- common/autotest_common.sh@930 -- # kill -0 3269604 00:07:34.985 10:04:08 -- common/autotest_common.sh@931 -- # uname 00:07:34.985 10:04:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:34.985 10:04:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3269604 00:07:35.243 10:04:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:35.243 10:04:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:35.243 10:04:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3269604' 00:07:35.243 killing process with pid 3269604 00:07:35.243 10:04:08 -- common/autotest_common.sh@945 -- # kill 3269604 00:07:35.243 10:04:08 -- common/autotest_common.sh@950 -- # wait 3269604 00:07:37.145 10:04:09 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:37.145 10:04:09 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:37.146 10:04:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:37.146 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 10:04:09 -- json_config/json_config.sh@381 -- # return 0 00:07:37.146 10:04:09 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:37.146 INFO: Success 00:07:37.146 00:07:37.146 real 0m16.168s 00:07:37.146 user 0m18.563s 00:07:37.146 sys 0m2.164s 00:07:37.146 10:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.146 10:04:09 -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 ************************************ 00:07:37.146 END TEST json_config 00:07:37.146 ************************************ 00:07:37.146 10:04:10 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:37.146 10:04:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.146 10:04:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.146 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 ************************************ 00:07:37.146 START TEST json_config_extra_key 00:07:37.146 ************************************ 00:07:37.146 10:04:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.146 10:04:10 -- nvmf/common.sh@7 -- # uname -s 00:07:37.146 10:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.146 10:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.146 10:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.146 10:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.146 10:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.146 10:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.146 10:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.146 10:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.146 10:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.146 10:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.146 10:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:37.146 10:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:37.146 10:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.146 10:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.146 10:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:37.146 10:04:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.146 10:04:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.146 10:04:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.146 10:04:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.146 10:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.146 10:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.146 10:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.146 10:04:10 -- paths/export.sh@5 -- # export PATH 00:07:37.146 10:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.146 10:04:10 -- nvmf/common.sh@46 -- # : 0 00:07:37.146 10:04:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:37.146 10:04:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:37.146 10:04:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:37.146 10:04:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.146 10:04:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.146 10:04:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:37.146 10:04:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:37.146 10:04:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:37.146 INFO: launching applications... 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3270937 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:37.146 Waiting for target to run... 
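"Waiting for target to run..." is backed by waitforlisten from autotest_common.sh, which keeps probing until the new process answers on its RPC socket or dies. A reduced, illustrative version of that idea follows; wait_for_rpc is a hypothetical name, the real helper does more bookkeeping, and $SPDK is again assumed to be an SPDK checkout, but the rpc.py options used for the probe all appear elsewhere in this log:

wait_for_rpc() {    # illustrative only, not the real waitforlisten
    local pid=$1 sock=$2 i=0 max_retries=100
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1      # the app died while starting up
        # -t 1 gives rpc.py a one-second timeout; success means the socket accepts RPCs
        "$SPDK"/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}
wait_for_rpc 3270937 /var/tmp/spdk_tgt.sock    # pid and socket as reported in the log above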
00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3270937 /var/tmp/spdk_tgt.sock 00:07:37.146 10:04:10 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:37.146 10:04:10 -- common/autotest_common.sh@819 -- # '[' -z 3270937 ']' 00:07:37.146 10:04:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:37.146 10:04:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.146 10:04:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:37.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:37.146 10:04:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.146 10:04:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.146 [2024-04-17 10:04:10.173820] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:37.146 [2024-04-17 10:04:10.173888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270937 ] 00:07:37.146 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.405 [2024-04-17 10:04:10.617674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.405 [2024-04-17 10:04:10.722025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.405 [2024-04-17 10:04:10.722163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.974 10:04:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.974 10:04:11 -- common/autotest_common.sh@852 -- # return 0 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:37.974 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:07:37.974 INFO: shutting down applications... 
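The shutdown that follows is the same bounded pattern used earlier for the json_config target: send SIGINT, then poll the PID with kill -0 for up to 30 half-second intervals before giving up. A reduced sketch of that helper; shutdown_app here is a simplified stand-in for json_config_test_shutdown_app:

shutdown_app() {    # simplified stand-in for json_config_test_shutdown_app
    local pid=$1 i
    kill -SIGINT "$pid"                          # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then    # process is gone
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1    # still alive after ~15 s; the real helper escalates at this point
}
shutdown_app 3270937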
00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3270937 ]] 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3270937 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3270937 00:07:37.974 10:04:11 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:38.542 10:04:11 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:38.542 10:04:11 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3270937 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:38.543 SPDK target shutdown done 00:07:38.543 10:04:11 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:38.543 Success 00:07:38.543 00:07:38.543 real 0m1.570s 00:07:38.543 user 0m1.362s 00:07:38.543 sys 0m0.543s 00:07:38.543 10:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.543 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:38.543 ************************************ 00:07:38.543 END TEST json_config_extra_key 00:07:38.543 ************************************ 00:07:38.543 10:04:11 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:38.543 10:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.543 10:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.543 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:38.543 ************************************ 00:07:38.543 START TEST alias_rpc 00:07:38.543 ************************************ 00:07:38.543 10:04:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:38.543 * Looking for test storage... 00:07:38.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:38.543 10:04:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:38.543 10:04:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3271362 00:07:38.543 10:04:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3271362 00:07:38.543 10:04:11 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:38.543 10:04:11 -- common/autotest_common.sh@819 -- # '[' -z 3271362 ']' 00:07:38.543 10:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.543 10:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:38.543 10:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:38.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.543 10:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:38.543 10:04:11 -- common/autotest_common.sh@10 -- # set +x 00:07:38.543 [2024-04-17 10:04:11.785507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:38.543 [2024-04-17 10:04:11.785575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271362 ] 00:07:38.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.543 [2024-04-17 10:04:11.865945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.802 [2024-04-17 10:04:11.953264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.802 [2024-04-17 10:04:11.953417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.369 10:04:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.369 10:04:12 -- common/autotest_common.sh@852 -- # return 0 00:07:39.369 10:04:12 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:39.629 10:04:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3271362 00:07:39.629 10:04:12 -- common/autotest_common.sh@926 -- # '[' -z 3271362 ']' 00:07:39.629 10:04:12 -- common/autotest_common.sh@930 -- # kill -0 3271362 00:07:39.629 10:04:12 -- common/autotest_common.sh@931 -- # uname 00:07:39.629 10:04:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.629 10:04:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3271362 00:07:39.629 10:04:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:39.629 10:04:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:39.629 10:04:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3271362' 00:07:39.629 killing process with pid 3271362 00:07:39.629 10:04:12 -- common/autotest_common.sh@945 -- # kill 3271362 00:07:39.629 10:04:12 -- common/autotest_common.sh@950 -- # wait 3271362 00:07:40.198 00:07:40.198 real 0m1.659s 00:07:40.198 user 0m1.881s 00:07:40.198 sys 0m0.437s 00:07:40.198 10:04:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.198 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.198 ************************************ 00:07:40.198 END TEST alias_rpc 00:07:40.198 ************************************ 00:07:40.198 10:04:13 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:40.198 10:04:13 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:40.198 10:04:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.198 10:04:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.198 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.198 ************************************ 00:07:40.198 START TEST spdkcli_tcp 00:07:40.198 ************************************ 00:07:40.198 10:04:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:40.198 * Looking for test storage... 
00:07:40.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:40.198 10:04:13 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:40.198 10:04:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:40.198 10:04:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:40.198 10:04:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:40.198 10:04:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:40.198 10:04:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:40.198 10:04:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:40.198 10:04:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:40.198 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.199 10:04:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3271694 00:07:40.199 10:04:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 3271694 00:07:40.199 10:04:13 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:40.199 10:04:13 -- common/autotest_common.sh@819 -- # '[' -z 3271694 ']' 00:07:40.199 10:04:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.199 10:04:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:40.199 10:04:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.199 10:04:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:40.199 10:04:13 -- common/autotest_common.sh@10 -- # set +x 00:07:40.199 [2024-04-17 10:04:13.479266] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:40.199 [2024-04-17 10:04:13.479313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271694 ] 00:07:40.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.458 [2024-04-17 10:04:13.547284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.458 [2024-04-17 10:04:13.635539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.458 [2024-04-17 10:04:13.635728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.458 [2024-04-17 10:04:13.635733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.395 10:04:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:41.395 10:04:14 -- common/autotest_common.sh@852 -- # return 0 00:07:41.395 10:04:14 -- spdkcli/tcp.sh@31 -- # socat_pid=3271836 00:07:41.395 10:04:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:41.395 10:04:14 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:41.395 [ 00:07:41.395 "bdev_malloc_delete", 00:07:41.395 "bdev_malloc_create", 00:07:41.395 "bdev_null_resize", 00:07:41.395 "bdev_null_delete", 00:07:41.395 "bdev_null_create", 00:07:41.395 "bdev_nvme_cuse_unregister", 00:07:41.395 "bdev_nvme_cuse_register", 00:07:41.395 "bdev_opal_new_user", 00:07:41.395 "bdev_opal_set_lock_state", 00:07:41.395 "bdev_opal_delete", 00:07:41.395 "bdev_opal_get_info", 00:07:41.395 "bdev_opal_create", 00:07:41.395 "bdev_nvme_opal_revert", 00:07:41.395 "bdev_nvme_opal_init", 00:07:41.395 "bdev_nvme_send_cmd", 00:07:41.395 "bdev_nvme_get_path_iostat", 00:07:41.395 "bdev_nvme_get_mdns_discovery_info", 00:07:41.395 "bdev_nvme_stop_mdns_discovery", 00:07:41.395 "bdev_nvme_start_mdns_discovery", 00:07:41.395 "bdev_nvme_set_multipath_policy", 00:07:41.395 "bdev_nvme_set_preferred_path", 00:07:41.395 "bdev_nvme_get_io_paths", 00:07:41.395 "bdev_nvme_remove_error_injection", 00:07:41.395 "bdev_nvme_add_error_injection", 00:07:41.395 "bdev_nvme_get_discovery_info", 00:07:41.395 "bdev_nvme_stop_discovery", 00:07:41.395 "bdev_nvme_start_discovery", 00:07:41.395 "bdev_nvme_get_controller_health_info", 00:07:41.395 "bdev_nvme_disable_controller", 00:07:41.395 "bdev_nvme_enable_controller", 00:07:41.395 "bdev_nvme_reset_controller", 00:07:41.395 "bdev_nvme_get_transport_statistics", 00:07:41.395 "bdev_nvme_apply_firmware", 00:07:41.395 "bdev_nvme_detach_controller", 00:07:41.395 "bdev_nvme_get_controllers", 00:07:41.395 "bdev_nvme_attach_controller", 00:07:41.395 "bdev_nvme_set_hotplug", 00:07:41.395 "bdev_nvme_set_options", 00:07:41.395 "bdev_passthru_delete", 00:07:41.395 "bdev_passthru_create", 00:07:41.395 "bdev_lvol_grow_lvstore", 00:07:41.395 "bdev_lvol_get_lvols", 00:07:41.395 "bdev_lvol_get_lvstores", 00:07:41.395 "bdev_lvol_delete", 00:07:41.395 "bdev_lvol_set_read_only", 00:07:41.395 "bdev_lvol_resize", 00:07:41.395 "bdev_lvol_decouple_parent", 00:07:41.395 "bdev_lvol_inflate", 00:07:41.395 "bdev_lvol_rename", 00:07:41.395 "bdev_lvol_clone_bdev", 00:07:41.395 "bdev_lvol_clone", 00:07:41.395 "bdev_lvol_snapshot", 00:07:41.395 "bdev_lvol_create", 00:07:41.395 "bdev_lvol_delete_lvstore", 00:07:41.395 "bdev_lvol_rename_lvstore", 00:07:41.395 "bdev_lvol_create_lvstore", 00:07:41.395 "bdev_raid_set_options", 00:07:41.395 
"bdev_raid_remove_base_bdev", 00:07:41.395 "bdev_raid_add_base_bdev", 00:07:41.395 "bdev_raid_delete", 00:07:41.395 "bdev_raid_create", 00:07:41.395 "bdev_raid_get_bdevs", 00:07:41.395 "bdev_error_inject_error", 00:07:41.395 "bdev_error_delete", 00:07:41.395 "bdev_error_create", 00:07:41.395 "bdev_split_delete", 00:07:41.395 "bdev_split_create", 00:07:41.395 "bdev_delay_delete", 00:07:41.395 "bdev_delay_create", 00:07:41.395 "bdev_delay_update_latency", 00:07:41.395 "bdev_zone_block_delete", 00:07:41.395 "bdev_zone_block_create", 00:07:41.395 "blobfs_create", 00:07:41.395 "blobfs_detect", 00:07:41.395 "blobfs_set_cache_size", 00:07:41.395 "bdev_aio_delete", 00:07:41.395 "bdev_aio_rescan", 00:07:41.395 "bdev_aio_create", 00:07:41.395 "bdev_ftl_set_property", 00:07:41.395 "bdev_ftl_get_properties", 00:07:41.395 "bdev_ftl_get_stats", 00:07:41.395 "bdev_ftl_unmap", 00:07:41.396 "bdev_ftl_unload", 00:07:41.396 "bdev_ftl_delete", 00:07:41.396 "bdev_ftl_load", 00:07:41.396 "bdev_ftl_create", 00:07:41.396 "bdev_virtio_attach_controller", 00:07:41.396 "bdev_virtio_scsi_get_devices", 00:07:41.396 "bdev_virtio_detach_controller", 00:07:41.396 "bdev_virtio_blk_set_hotplug", 00:07:41.396 "bdev_iscsi_delete", 00:07:41.396 "bdev_iscsi_create", 00:07:41.396 "bdev_iscsi_set_options", 00:07:41.396 "accel_error_inject_error", 00:07:41.396 "ioat_scan_accel_module", 00:07:41.396 "dsa_scan_accel_module", 00:07:41.396 "iaa_scan_accel_module", 00:07:41.396 "iscsi_set_options", 00:07:41.396 "iscsi_get_auth_groups", 00:07:41.396 "iscsi_auth_group_remove_secret", 00:07:41.396 "iscsi_auth_group_add_secret", 00:07:41.396 "iscsi_delete_auth_group", 00:07:41.396 "iscsi_create_auth_group", 00:07:41.396 "iscsi_set_discovery_auth", 00:07:41.396 "iscsi_get_options", 00:07:41.396 "iscsi_target_node_request_logout", 00:07:41.396 "iscsi_target_node_set_redirect", 00:07:41.396 "iscsi_target_node_set_auth", 00:07:41.396 "iscsi_target_node_add_lun", 00:07:41.396 "iscsi_get_connections", 00:07:41.396 "iscsi_portal_group_set_auth", 00:07:41.396 "iscsi_start_portal_group", 00:07:41.396 "iscsi_delete_portal_group", 00:07:41.396 "iscsi_create_portal_group", 00:07:41.396 "iscsi_get_portal_groups", 00:07:41.396 "iscsi_delete_target_node", 00:07:41.396 "iscsi_target_node_remove_pg_ig_maps", 00:07:41.396 "iscsi_target_node_add_pg_ig_maps", 00:07:41.396 "iscsi_create_target_node", 00:07:41.396 "iscsi_get_target_nodes", 00:07:41.396 "iscsi_delete_initiator_group", 00:07:41.396 "iscsi_initiator_group_remove_initiators", 00:07:41.396 "iscsi_initiator_group_add_initiators", 00:07:41.396 "iscsi_create_initiator_group", 00:07:41.396 "iscsi_get_initiator_groups", 00:07:41.396 "nvmf_set_crdt", 00:07:41.396 "nvmf_set_config", 00:07:41.396 "nvmf_set_max_subsystems", 00:07:41.396 "nvmf_subsystem_get_listeners", 00:07:41.396 "nvmf_subsystem_get_qpairs", 00:07:41.396 "nvmf_subsystem_get_controllers", 00:07:41.396 "nvmf_get_stats", 00:07:41.396 "nvmf_get_transports", 00:07:41.396 "nvmf_create_transport", 00:07:41.396 "nvmf_get_targets", 00:07:41.396 "nvmf_delete_target", 00:07:41.396 "nvmf_create_target", 00:07:41.396 "nvmf_subsystem_allow_any_host", 00:07:41.396 "nvmf_subsystem_remove_host", 00:07:41.396 "nvmf_subsystem_add_host", 00:07:41.396 "nvmf_subsystem_remove_ns", 00:07:41.396 "nvmf_subsystem_add_ns", 00:07:41.396 "nvmf_subsystem_listener_set_ana_state", 00:07:41.396 "nvmf_discovery_get_referrals", 00:07:41.396 "nvmf_discovery_remove_referral", 00:07:41.396 "nvmf_discovery_add_referral", 00:07:41.396 "nvmf_subsystem_remove_listener", 
00:07:41.396 "nvmf_subsystem_add_listener", 00:07:41.396 "nvmf_delete_subsystem", 00:07:41.396 "nvmf_create_subsystem", 00:07:41.396 "nvmf_get_subsystems", 00:07:41.396 "env_dpdk_get_mem_stats", 00:07:41.396 "nbd_get_disks", 00:07:41.396 "nbd_stop_disk", 00:07:41.396 "nbd_start_disk", 00:07:41.396 "ublk_recover_disk", 00:07:41.396 "ublk_get_disks", 00:07:41.396 "ublk_stop_disk", 00:07:41.396 "ublk_start_disk", 00:07:41.396 "ublk_destroy_target", 00:07:41.396 "ublk_create_target", 00:07:41.396 "virtio_blk_create_transport", 00:07:41.396 "virtio_blk_get_transports", 00:07:41.396 "vhost_controller_set_coalescing", 00:07:41.396 "vhost_get_controllers", 00:07:41.396 "vhost_delete_controller", 00:07:41.396 "vhost_create_blk_controller", 00:07:41.396 "vhost_scsi_controller_remove_target", 00:07:41.396 "vhost_scsi_controller_add_target", 00:07:41.396 "vhost_start_scsi_controller", 00:07:41.396 "vhost_create_scsi_controller", 00:07:41.396 "thread_set_cpumask", 00:07:41.396 "framework_get_scheduler", 00:07:41.396 "framework_set_scheduler", 00:07:41.396 "framework_get_reactors", 00:07:41.396 "thread_get_io_channels", 00:07:41.396 "thread_get_pollers", 00:07:41.396 "thread_get_stats", 00:07:41.396 "framework_monitor_context_switch", 00:07:41.396 "spdk_kill_instance", 00:07:41.396 "log_enable_timestamps", 00:07:41.396 "log_get_flags", 00:07:41.396 "log_clear_flag", 00:07:41.396 "log_set_flag", 00:07:41.396 "log_get_level", 00:07:41.396 "log_set_level", 00:07:41.396 "log_get_print_level", 00:07:41.396 "log_set_print_level", 00:07:41.396 "framework_enable_cpumask_locks", 00:07:41.396 "framework_disable_cpumask_locks", 00:07:41.396 "framework_wait_init", 00:07:41.396 "framework_start_init", 00:07:41.396 "scsi_get_devices", 00:07:41.396 "bdev_get_histogram", 00:07:41.396 "bdev_enable_histogram", 00:07:41.396 "bdev_set_qos_limit", 00:07:41.396 "bdev_set_qd_sampling_period", 00:07:41.396 "bdev_get_bdevs", 00:07:41.396 "bdev_reset_iostat", 00:07:41.396 "bdev_get_iostat", 00:07:41.396 "bdev_examine", 00:07:41.396 "bdev_wait_for_examine", 00:07:41.396 "bdev_set_options", 00:07:41.396 "notify_get_notifications", 00:07:41.396 "notify_get_types", 00:07:41.396 "accel_get_stats", 00:07:41.396 "accel_set_options", 00:07:41.396 "accel_set_driver", 00:07:41.396 "accel_crypto_key_destroy", 00:07:41.396 "accel_crypto_keys_get", 00:07:41.396 "accel_crypto_key_create", 00:07:41.396 "accel_assign_opc", 00:07:41.396 "accel_get_module_info", 00:07:41.396 "accel_get_opc_assignments", 00:07:41.396 "vmd_rescan", 00:07:41.396 "vmd_remove_device", 00:07:41.396 "vmd_enable", 00:07:41.396 "sock_set_default_impl", 00:07:41.396 "sock_impl_set_options", 00:07:41.396 "sock_impl_get_options", 00:07:41.396 "iobuf_get_stats", 00:07:41.396 "iobuf_set_options", 00:07:41.396 "framework_get_pci_devices", 00:07:41.396 "framework_get_config", 00:07:41.396 "framework_get_subsystems", 00:07:41.396 "trace_get_info", 00:07:41.396 "trace_get_tpoint_group_mask", 00:07:41.396 "trace_disable_tpoint_group", 00:07:41.396 "trace_enable_tpoint_group", 00:07:41.396 "trace_clear_tpoint_mask", 00:07:41.396 "trace_set_tpoint_mask", 00:07:41.396 "spdk_get_version", 00:07:41.396 "rpc_get_methods" 00:07:41.396 ] 00:07:41.396 10:04:14 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:41.396 10:04:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:41.396 10:04:14 -- common/autotest_common.sh@10 -- # set +x 00:07:41.396 10:04:14 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:41.396 10:04:14 -- spdkcli/tcp.sh@38 -- # killprocess 
3271694 00:07:41.396 10:04:14 -- common/autotest_common.sh@926 -- # '[' -z 3271694 ']' 00:07:41.396 10:04:14 -- common/autotest_common.sh@930 -- # kill -0 3271694 00:07:41.396 10:04:14 -- common/autotest_common.sh@931 -- # uname 00:07:41.396 10:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:41.396 10:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3271694 00:07:41.396 10:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:41.396 10:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:41.396 10:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3271694' 00:07:41.396 killing process with pid 3271694 00:07:41.396 10:04:14 -- common/autotest_common.sh@945 -- # kill 3271694 00:07:41.396 10:04:14 -- common/autotest_common.sh@950 -- # wait 3271694 00:07:41.964 00:07:41.964 real 0m1.682s 00:07:41.964 user 0m3.228s 00:07:41.964 sys 0m0.443s 00:07:41.964 10:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.964 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.964 ************************************ 00:07:41.964 END TEST spdkcli_tcp 00:07:41.964 ************************************ 00:07:41.964 10:04:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.964 10:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.964 10:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.964 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.964 ************************************ 00:07:41.964 START TEST dpdk_mem_utility 00:07:41.964 ************************************ 00:07:41.964 10:04:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.964 * Looking for test storage... 00:07:41.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:41.964 10:04:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:41.964 10:04:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3272031 00:07:41.964 10:04:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:41.964 10:04:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3272031 00:07:41.964 10:04:15 -- common/autotest_common.sh@819 -- # '[' -z 3272031 ']' 00:07:41.964 10:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.964 10:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.964 10:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.964 10:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.964 10:04:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.964 [2024-04-17 10:04:15.203979] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:41.964 [2024-04-17 10:04:15.204042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272031 ] 00:07:41.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.964 [2024-04-17 10:04:15.285907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.223 [2024-04-17 10:04:15.372797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.223 [2024-04-17 10:04:15.372948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.160 10:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.160 10:04:16 -- common/autotest_common.sh@852 -- # return 0 00:07:43.160 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:43.160 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:43.160 10:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.160 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:07:43.160 { 00:07:43.160 "filename": "/tmp/spdk_mem_dump.txt" 00:07:43.160 } 00:07:43.160 10:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.160 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:43.160 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:43.160 1 heaps totaling size 814.000000 MiB 00:07:43.160 size: 814.000000 MiB heap id: 0 00:07:43.160 end heaps---------- 00:07:43.160 8 mempools totaling size 598.116089 MiB 00:07:43.160 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:43.160 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:43.160 size: 84.521057 MiB name: bdev_io_3272031 00:07:43.160 size: 51.011292 MiB name: evtpool_3272031 00:07:43.160 size: 50.003479 MiB name: msgpool_3272031 00:07:43.160 size: 21.763794 MiB name: PDU_Pool 00:07:43.160 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:43.160 size: 0.026123 MiB name: Session_Pool 00:07:43.160 end mempools------- 00:07:43.160 6 memzones totaling size 4.142822 MiB 00:07:43.160 size: 1.000366 MiB name: RG_ring_0_3272031 00:07:43.160 size: 1.000366 MiB name: RG_ring_1_3272031 00:07:43.160 size: 1.000366 MiB name: RG_ring_4_3272031 00:07:43.160 size: 1.000366 MiB name: RG_ring_5_3272031 00:07:43.160 size: 0.125366 MiB name: RG_ring_2_3272031 00:07:43.160 size: 0.015991 MiB name: RG_ring_3_3272031 00:07:43.160 end memzones------- 00:07:43.160 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:43.160 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:43.160 list of free elements. 
size: 12.519348 MiB 00:07:43.160 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:43.160 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:43.160 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:43.160 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:43.160 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:43.160 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:43.160 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:43.160 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:43.160 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:43.160 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:43.160 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:43.160 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:43.160 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:43.160 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:43.160 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:43.160 list of standard malloc elements. size: 199.218079 MiB 00:07:43.160 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:43.160 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:43.160 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:43.160 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:43.160 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:43.160 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:43.160 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:43.160 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:43.160 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:43.160 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:43.160 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:43.160 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:43.160 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:43.161 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:43.161 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:43.161 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:43.161 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:43.161 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:43.161 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:43.161 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:43.161 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:07:43.161 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:43.161 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:43.161 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:43.161 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:43.161 list of memzone associated elements. size: 602.262573 MiB 00:07:43.161 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:43.161 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:43.161 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:43.161 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:43.161 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:43.161 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3272031_0 00:07:43.161 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:43.161 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3272031_0 00:07:43.161 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:43.161 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3272031_0 00:07:43.161 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:43.161 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:43.161 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:43.161 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:43.161 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:43.161 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3272031 00:07:43.161 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:43.161 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3272031 00:07:43.161 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:43.161 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3272031 00:07:43.161 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:43.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:43.161 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:43.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:43.161 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:43.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:43.161 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:43.161 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:43.161 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:43.161 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3272031 00:07:43.161 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:43.161 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3272031 00:07:43.161 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:43.161 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3272031 00:07:43.161 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:43.161 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3272031 00:07:43.161 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:43.161 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3272031 00:07:43.161 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:43.161 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:43.161 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:43.161 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:43.161 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:43.161 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:43.161 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:43.161 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3272031 00:07:43.161 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:43.161 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:43.161 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:43.161 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:43.161 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:43.161 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3272031 00:07:43.161 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:43.161 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:43.161 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:43.161 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3272031 00:07:43.161 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:43.161 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3272031 00:07:43.161 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:43.161 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:43.161 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:43.161 10:04:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3272031 00:07:43.161 10:04:16 -- common/autotest_common.sh@926 -- # '[' -z 3272031 ']' 00:07:43.161 10:04:16 -- common/autotest_common.sh@930 -- # kill -0 3272031 00:07:43.161 10:04:16 -- common/autotest_common.sh@931 -- # uname 00:07:43.161 10:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:43.161 10:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3272031 00:07:43.161 10:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:43.161 10:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:43.161 10:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3272031' 00:07:43.161 killing process with pid 3272031 00:07:43.161 10:04:16 -- common/autotest_common.sh@945 -- # kill 3272031 00:07:43.161 10:04:16 -- common/autotest_common.sh@950 -- # wait 3272031 00:07:43.420 00:07:43.420 real 0m1.610s 00:07:43.420 user 0m1.824s 00:07:43.420 sys 0m0.424s 00:07:43.420 10:04:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.420 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:07:43.420 ************************************ 00:07:43.420 END TEST dpdk_mem_utility 00:07:43.420 ************************************ 00:07:43.420 10:04:16 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:43.420 10:04:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.420 10:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.420 10:04:16 -- common/autotest_common.sh@10 -- # set +x 
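The memory report in the dpdk_mem_utility test is produced in two steps: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory dump (the log shows /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then condenses that dump into the heap, mempool, and memzone totals seen above, with -m 0 expanding heap 0 into the per-element listing. The equivalent manual sequence against a running spdk_tgt, with $SPDK an SPDK checkout and the default RPC socket assumed:

"$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
"$SPDK"/scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
"$SPDK"/scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0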
00:07:43.421 ************************************ 00:07:43.421 START TEST event 00:07:43.421 ************************************ 00:07:43.421 10:04:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:43.680 * Looking for test storage... 00:07:43.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:43.680 10:04:16 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:43.680 10:04:16 -- bdev/nbd_common.sh@6 -- # set -e 00:07:43.680 10:04:16 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:43.680 10:04:16 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:43.680 10:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.680 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:07:43.680 ************************************ 00:07:43.680 START TEST event_perf 00:07:43.680 ************************************ 00:07:43.680 10:04:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:43.680 Running I/O for 1 seconds...[2024-04-17 10:04:16.832986] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:43.680 [2024-04-17 10:04:16.833062] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272351 ] 00:07:43.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.680 [2024-04-17 10:04:16.914282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.680 [2024-04-17 10:04:17.002336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.680 [2024-04-17 10:04:17.002438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.680 [2024-04-17 10:04:17.002566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.680 [2024-04-17 10:04:17.002567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.056 Running I/O for 1 seconds... 00:07:45.056 lcore 0: 163361 00:07:45.056 lcore 1: 163359 00:07:45.056 lcore 2: 163359 00:07:45.056 lcore 3: 163361 00:07:45.056 done. 
00:07:45.056 00:07:45.056 real 0m1.293s 00:07:45.056 user 0m4.191s 00:07:45.056 sys 0m0.097s 00:07:45.056 10:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.056 10:04:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 ************************************ 00:07:45.056 END TEST event_perf 00:07:45.056 ************************************ 00:07:45.056 10:04:18 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:45.056 10:04:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:45.056 10:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.056 10:04:18 -- common/autotest_common.sh@10 -- # set +x 00:07:45.056 ************************************ 00:07:45.056 START TEST event_reactor 00:07:45.056 ************************************ 00:07:45.056 10:04:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:45.056 [2024-04-17 10:04:18.161759] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:45.057 [2024-04-17 10:04:18.161838] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272640 ] 00:07:45.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.057 [2024-04-17 10:04:18.241873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.057 [2024-04-17 10:04:18.325616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.435 test_start 00:07:46.435 oneshot 00:07:46.435 tick 100 00:07:46.435 tick 100 00:07:46.435 tick 250 00:07:46.435 tick 100 00:07:46.435 tick 100 00:07:46.435 tick 100 00:07:46.435 tick 250 00:07:46.435 tick 500 00:07:46.435 tick 100 00:07:46.435 tick 100 00:07:46.435 tick 250 00:07:46.435 tick 100 00:07:46.435 tick 100 00:07:46.435 test_end 00:07:46.435 00:07:46.435 real 0m1.278s 00:07:46.435 user 0m1.185s 00:07:46.435 sys 0m0.088s 00:07:46.435 10:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.435 10:04:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.435 ************************************ 00:07:46.435 END TEST event_reactor 00:07:46.435 ************************************ 00:07:46.435 10:04:19 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:46.435 10:04:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:46.435 10:04:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.435 10:04:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.435 ************************************ 00:07:46.435 START TEST event_reactor_perf 00:07:46.435 ************************************ 00:07:46.435 10:04:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:46.435 [2024-04-17 10:04:19.473976] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:46.435 [2024-04-17 10:04:19.474055] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272920 ] 00:07:46.435 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.435 [2024-04-17 10:04:19.553994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.435 [2024-04-17 10:04:19.635760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.811 test_start 00:07:47.811 test_end 00:07:47.811 Performance: 309161 events per second 00:07:47.811 00:07:47.811 real 0m1.283s 00:07:47.811 user 0m1.197s 00:07:47.811 sys 0m0.080s 00:07:47.811 10:04:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.811 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:47.811 ************************************ 00:07:47.811 END TEST event_reactor_perf 00:07:47.811 ************************************ 00:07:47.811 10:04:20 -- event/event.sh@49 -- # uname -s 00:07:47.811 10:04:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:47.811 10:04:20 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:47.811 10:04:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.811 10:04:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.811 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:47.811 ************************************ 00:07:47.811 START TEST event_scheduler 00:07:47.811 ************************************ 00:07:47.811 10:04:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:47.811 * Looking for test storage... 00:07:47.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:47.811 10:04:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:47.811 10:04:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3273231 00:07:47.811 10:04:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.811 10:04:20 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:47.811 10:04:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 3273231 00:07:47.811 10:04:20 -- common/autotest_common.sh@819 -- # '[' -z 3273231 ']' 00:07:47.811 10:04:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.811 10:04:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:47.811 10:04:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.811 10:04:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:47.811 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:47.811 [2024-04-17 10:04:20.903071] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:47.811 [2024-04-17 10:04:20.903138] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273231 ] 00:07:47.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.811 [2024-04-17 10:04:20.962924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.811 [2024-04-17 10:04:21.034962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.811 [2024-04-17 10:04:21.035058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.811 [2024-04-17 10:04:21.035083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.811 [2024-04-17 10:04:21.035083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.811 10:04:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:47.811 10:04:21 -- common/autotest_common.sh@852 -- # return 0 00:07:47.811 10:04:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:47.811 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:47.811 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:47.811 POWER: Env isn't set yet! 00:07:47.811 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:47.811 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:47.811 POWER: Cannot set governor of lcore 0 to userspace 00:07:47.811 POWER: Attempting to initialise PSTAT power management... 00:07:47.811 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:47.811 POWER: Initialized successfully for lcore 0 power management 00:07:47.811 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:47.811 POWER: Initialized successfully for lcore 1 power management 00:07:48.070 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:48.070 POWER: Initialized successfully for lcore 2 power management 00:07:48.070 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:48.070 POWER: Initialized successfully for lcore 3 power management 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 [2024-04-17 10:04:21.230561] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
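scheduler.sh starts the test app with --wait-for-rpc, switches it to the dynamic scheduler, and only then completes initialization, which is when DPDK power management moves each lcore governor to 'performance' (and back to 'powersave' at shutdown, as logged further down). A rough equivalent against any SPDK app started with --wait-for-rpc, using rpc.py directly instead of the rpc_cmd wrapper used here (the framework_get_scheduler check is an optional extra, not taken from this trace):

./scripts/rpc.py framework_set_scheduler dynamic   # must be issued before subsystem init, mirroring scheduler.sh@39
./scripts/rpc.py framework_start_init              # finishes init and triggers the governor changes reported above, mirroring scheduler.sh@40
./scripts/rpc.py framework_get_scheduler           # optional: confirm the dynamic scheduler is now active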
00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:48.070 10:04:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.070 10:04:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 ************************************ 00:07:48.070 START TEST scheduler_create_thread 00:07:48.070 ************************************ 00:07:48.070 10:04:21 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 2 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 3 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 4 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 5 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 6 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 7 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 8 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 9 00:07:48.070 
10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 10 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.070 10:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:48.070 10:04:21 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:48.070 10:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:48.070 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:49.006 10:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.006 10:04:22 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:49.006 10:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.006 10:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 10:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.380 10:04:23 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:50.380 10:04:23 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:50.380 10:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.380 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:07:51.315 10:04:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.315 00:07:51.315 real 0m3.380s 00:07:51.315 user 0m0.025s 00:07:51.315 sys 0m0.004s 00:07:51.316 10:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.316 10:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:51.316 ************************************ 00:07:51.316 END TEST scheduler_create_thread 00:07:51.316 ************************************ 00:07:51.574 10:04:24 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:51.574 10:04:24 -- scheduler/scheduler.sh@46 -- # killprocess 3273231 00:07:51.574 10:04:24 -- common/autotest_common.sh@926 -- # '[' -z 3273231 ']' 00:07:51.574 10:04:24 -- common/autotest_common.sh@930 -- # kill -0 3273231 00:07:51.574 10:04:24 -- common/autotest_common.sh@931 -- # uname 00:07:51.574 10:04:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:51.574 10:04:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3273231 00:07:51.574 10:04:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:51.574 10:04:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:51.574 10:04:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3273231' 00:07:51.574 killing process with pid 3273231 00:07:51.574 10:04:24 -- common/autotest_common.sh@945 -- # kill 3273231 00:07:51.574 10:04:24 -- common/autotest_common.sh@950 -- # wait 3273231 00:07:51.833 [2024-04-17 10:04:24.998626] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
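scheduler_create_thread drives the app entirely through the scheduler_plugin RPCs traced above: it creates active threads pinned to each core, idle pinned threads, and an unpinned partially active thread, then adjusts one thread's load and deletes another before killing the app. Condensed from those rpc_cmd calls (thread ids 11 and 12 are the values returned in this particular run):

rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # unpinned, starts idle; returns a thread id
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise that thread to ~50% active load
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                               # remove the short-lived 'deleted' thread again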
00:07:51.833 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:07:51.833 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:51.833 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:07:51.833 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:51.833 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:07:51.833 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:51.833 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:07:51.833 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:52.092 00:07:52.092 real 0m4.468s 00:07:52.092 user 0m8.001s 00:07:52.092 sys 0m0.316s 00:07:52.092 10:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.092 10:04:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 ************************************ 00:07:52.092 END TEST event_scheduler 00:07:52.092 ************************************ 00:07:52.092 10:04:25 -- event/event.sh@51 -- # modprobe -n nbd 00:07:52.092 10:04:25 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:52.092 10:04:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:52.092 10:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.092 10:04:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 ************************************ 00:07:52.092 START TEST app_repeat 00:07:52.092 ************************************ 00:07:52.092 10:04:25 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:07:52.092 10:04:25 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.092 10:04:25 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.092 10:04:25 -- event/event.sh@13 -- # local nbd_list 00:07:52.092 10:04:25 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:52.092 10:04:25 -- event/event.sh@14 -- # local bdev_list 00:07:52.092 10:04:25 -- event/event.sh@15 -- # local repeat_times=4 00:07:52.092 10:04:25 -- event/event.sh@17 -- # modprobe nbd 00:07:52.092 10:04:25 -- event/event.sh@19 -- # repeat_pid=3274084 00:07:52.092 10:04:25 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:52.092 10:04:25 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.092 10:04:25 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3274084' 00:07:52.092 Process app_repeat pid: 3274084 00:07:52.092 10:04:25 -- event/event.sh@23 -- # for i in {0..2} 00:07:52.092 10:04:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:52.092 spdk_app_start Round 0 00:07:52.092 10:04:25 -- event/event.sh@25 -- # waitforlisten 3274084 /var/tmp/spdk-nbd.sock 00:07:52.092 10:04:25 -- common/autotest_common.sh@819 -- # '[' -z 3274084 ']' 00:07:52.093 10:04:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.093 10:04:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:52.093 10:04:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:52.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:52.093 10:04:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:52.093 10:04:25 -- common/autotest_common.sh@10 -- # set +x 00:07:52.093 [2024-04-17 10:04:25.318366] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:52.093 [2024-04-17 10:04:25.318428] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274084 ] 00:07:52.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.093 [2024-04-17 10:04:25.401363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.351 [2024-04-17 10:04:25.486103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.351 [2024-04-17 10:04:25.486107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.287 10:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:53.287 10:04:26 -- common/autotest_common.sh@852 -- # return 0 00:07:53.287 10:04:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.287 Malloc0 00:07:53.287 10:04:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.287 Malloc1 00:07:53.287 10:04:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@12 -- # local i 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.287 10:04:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.546 /dev/nbd0 00:07:53.546 10:04:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.546 10:04:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.546 10:04:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:53.546 10:04:26 -- common/autotest_common.sh@857 -- # local i 00:07:53.546 10:04:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.546 10:04:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.546 10:04:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:53.546 10:04:26 -- 
common/autotest_common.sh@861 -- # break 00:07:53.546 10:04:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.546 10:04:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.546 10:04:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.546 1+0 records in 00:07:53.546 1+0 records out 00:07:53.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245479 s, 16.7 MB/s 00:07:53.546 10:04:26 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.546 10:04:26 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.546 10:04:26 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.546 10:04:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.546 10:04:26 -- common/autotest_common.sh@877 -- # return 0 00:07:53.546 10:04:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.546 10:04:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.546 10:04:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:53.805 /dev/nbd1 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:53.805 10:04:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:53.805 10:04:27 -- common/autotest_common.sh@857 -- # local i 00:07:53.805 10:04:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.805 10:04:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.805 10:04:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:53.805 10:04:27 -- common/autotest_common.sh@861 -- # break 00:07:53.805 10:04:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.805 10:04:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.805 10:04:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:53.805 1+0 records in 00:07:53.805 1+0 records out 00:07:53.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197164 s, 20.8 MB/s 00:07:53.805 10:04:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.805 10:04:27 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.805 10:04:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.805 10:04:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.805 10:04:27 -- common/autotest_common.sh@877 -- # return 0 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.805 10:04:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.063 { 00:07:54.063 "nbd_device": "/dev/nbd0", 00:07:54.063 "bdev_name": "Malloc0" 00:07:54.063 }, 00:07:54.063 { 00:07:54.063 "nbd_device": "/dev/nbd1", 
00:07:54.063 "bdev_name": "Malloc1" 00:07:54.063 } 00:07:54.063 ]' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.063 { 00:07:54.063 "nbd_device": "/dev/nbd0", 00:07:54.063 "bdev_name": "Malloc0" 00:07:54.063 }, 00:07:54.063 { 00:07:54.063 "nbd_device": "/dev/nbd1", 00:07:54.063 "bdev_name": "Malloc1" 00:07:54.063 } 00:07:54.063 ]' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.063 /dev/nbd1' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.063 /dev/nbd1' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.063 10:04:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.064 256+0 records in 00:07:54.064 256+0 records out 00:07:54.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977028 s, 107 MB/s 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.064 256+0 records in 00:07:54.064 256+0 records out 00:07:54.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195722 s, 53.6 MB/s 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.064 256+0 records in 00:07:54.064 256+0 records out 00:07:54.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207661 s, 50.5 MB/s 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.064 10:04:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@51 -- # local i 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.323 10:04:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@41 -- # break 00:07:54.581 10:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.582 10:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.582 10:04:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@41 -- # break 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.840 10:04:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@65 -- # true 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@104 -- # count=0 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:55.100 10:04:28 -- bdev/nbd_common.sh@109 -- # return 0 00:07:55.100 10:04:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:55.100 10:04:28 -- event/event.sh@35 -- # 
sleep 3 00:07:55.359 [2024-04-17 10:04:28.646012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.618 [2024-04-17 10:04:28.726381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.618 [2024-04-17 10:04:28.726386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.618 [2024-04-17 10:04:28.770827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:55.618 [2024-04-17 10:04:28.770874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:58.150 10:04:31 -- event/event.sh@23 -- # for i in {0..2} 00:07:58.150 10:04:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:58.150 spdk_app_start Round 1 00:07:58.150 10:04:31 -- event/event.sh@25 -- # waitforlisten 3274084 /var/tmp/spdk-nbd.sock 00:07:58.150 10:04:31 -- common/autotest_common.sh@819 -- # '[' -z 3274084 ']' 00:07:58.150 10:04:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.150 10:04:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.150 10:04:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:58.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.150 10:04:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.150 10:04:31 -- common/autotest_common.sh@10 -- # set +x 00:07:58.408 10:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.408 10:04:31 -- common/autotest_common.sh@852 -- # return 0 00:07:58.408 10:04:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:58.667 Malloc0 00:07:58.667 10:04:31 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:58.667 Malloc1 00:07:58.667 10:04:31 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@12 -- # local i 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:58.667 10:04:31 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:58.925 /dev/nbd0 00:07:58.925 10:04:32 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.925 10:04:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.925 10:04:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:58.925 10:04:32 -- common/autotest_common.sh@857 -- # local i 00:07:58.925 10:04:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:58.925 10:04:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:58.925 10:04:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:58.925 10:04:32 -- common/autotest_common.sh@861 -- # break 00:07:58.925 10:04:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:58.925 10:04:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:58.925 10:04:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:58.925 1+0 records in 00:07:58.925 1+0 records out 00:07:58.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184715 s, 22.2 MB/s 00:07:58.925 10:04:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:58.925 10:04:32 -- common/autotest_common.sh@874 -- # size=4096 00:07:58.925 10:04:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:58.925 10:04:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:58.925 10:04:32 -- common/autotest_common.sh@877 -- # return 0 00:07:58.925 10:04:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.925 10:04:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:58.925 10:04:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:59.183 /dev/nbd1 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:59.183 10:04:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:59.183 10:04:32 -- common/autotest_common.sh@857 -- # local i 00:07:59.183 10:04:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:59.183 10:04:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:59.183 10:04:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:59.183 10:04:32 -- common/autotest_common.sh@861 -- # break 00:07:59.183 10:04:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:59.183 10:04:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:59.183 10:04:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:59.183 1+0 records in 00:07:59.183 1+0 records out 00:07:59.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247741 s, 16.5 MB/s 00:07:59.183 10:04:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:59.183 10:04:32 -- common/autotest_common.sh@874 -- # size=4096 00:07:59.183 10:04:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:59.183 10:04:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:59.183 10:04:32 -- common/autotest_common.sh@877 -- # return 0 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.183 10:04:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.442 { 00:07:59.442 "nbd_device": "/dev/nbd0", 00:07:59.442 "bdev_name": "Malloc0" 00:07:59.442 }, 00:07:59.442 { 00:07:59.442 "nbd_device": "/dev/nbd1", 00:07:59.442 "bdev_name": "Malloc1" 00:07:59.442 } 00:07:59.442 ]' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.442 { 00:07:59.442 "nbd_device": "/dev/nbd0", 00:07:59.442 "bdev_name": "Malloc0" 00:07:59.442 }, 00:07:59.442 { 00:07:59.442 "nbd_device": "/dev/nbd1", 00:07:59.442 "bdev_name": "Malloc1" 00:07:59.442 } 00:07:59.442 ]' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:59.442 /dev/nbd1' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:59.442 /dev/nbd1' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@65 -- # count=2 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@95 -- # count=2 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:59.442 256+0 records in 00:07:59.442 256+0 records out 00:07:59.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994059 s, 105 MB/s 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:59.442 256+0 records in 00:07:59.442 256+0 records out 00:07:59.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197911 s, 53.0 MB/s 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.442 10:04:32 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:59.701 256+0 records in 00:07:59.701 256+0 records out 00:07:59.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212609 s, 49.3 MB/s 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@51 -- # local i 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.701 10:04:32 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@41 -- # break 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@41 -- # break 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.960 10:04:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@65 -- # true 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.219 10:04:33 -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.219 10:04:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:00.478 10:04:33 -- event/event.sh@35 -- # sleep 3 00:08:00.736 [2024-04-17 10:04:33.937827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.736 [2024-04-17 10:04:34.015763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.736 [2024-04-17 10:04:34.015768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.736 [2024-04-17 10:04:34.060669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:00.736 [2024-04-17 10:04:34.060716] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:04.024 10:04:36 -- event/event.sh@23 -- # for i in {0..2} 00:08:04.024 10:04:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:04.024 spdk_app_start Round 2 00:08:04.024 10:04:36 -- event/event.sh@25 -- # waitforlisten 3274084 /var/tmp/spdk-nbd.sock 00:08:04.024 10:04:36 -- common/autotest_common.sh@819 -- # '[' -z 3274084 ']' 00:08:04.024 10:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:04.024 10:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.024 10:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:04.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:04.024 10:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.024 10:04:36 -- common/autotest_common.sh@10 -- # set +x 00:08:04.024 10:04:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.024 10:04:36 -- common/autotest_common.sh@852 -- # return 0 00:08:04.024 10:04:36 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:04.024 Malloc0 00:08:04.024 10:04:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:04.024 Malloc1 00:08:04.024 10:04:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@12 -- # local i 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.024 10:04:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:04.024 /dev/nbd0 00:08:04.284 10:04:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.284 10:04:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.284 10:04:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:04.284 10:04:37 -- common/autotest_common.sh@857 -- # local i 00:08:04.284 10:04:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:04.284 10:04:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:04.284 10:04:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:04.284 10:04:37 -- common/autotest_common.sh@861 -- # break 00:08:04.284 10:04:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:04.284 10:04:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:04.284 10:04:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.284 1+0 records in 00:08:04.284 1+0 records out 00:08:04.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223354 s, 18.3 MB/s 00:08:04.284 10:04:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.284 10:04:37 -- common/autotest_common.sh@874 -- # size=4096 00:08:04.284 10:04:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.284 10:04:37 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:08:04.284 10:04:37 -- common/autotest_common.sh@877 -- # return 0 00:08:04.284 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.284 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.284 10:04:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:04.545 /dev/nbd1 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.545 10:04:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:04.545 10:04:37 -- common/autotest_common.sh@857 -- # local i 00:08:04.545 10:04:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:04.545 10:04:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:04.545 10:04:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:04.545 10:04:37 -- common/autotest_common.sh@861 -- # break 00:08:04.545 10:04:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:04.545 10:04:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:04.545 10:04:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.545 1+0 records in 00:08:04.545 1+0 records out 00:08:04.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210577 s, 19.5 MB/s 00:08:04.545 10:04:37 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.545 10:04:37 -- common/autotest_common.sh@874 -- # size=4096 00:08:04.545 10:04:37 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.545 10:04:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:04.545 10:04:37 -- common/autotest_common.sh@877 -- # return 0 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.545 10:04:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:04.817 { 00:08:04.817 "nbd_device": "/dev/nbd0", 00:08:04.817 "bdev_name": "Malloc0" 00:08:04.817 }, 00:08:04.817 { 00:08:04.817 "nbd_device": "/dev/nbd1", 00:08:04.817 "bdev_name": "Malloc1" 00:08:04.817 } 00:08:04.817 ]' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:04.817 { 00:08:04.817 "nbd_device": "/dev/nbd0", 00:08:04.817 "bdev_name": "Malloc0" 00:08:04.817 }, 00:08:04.817 { 00:08:04.817 "nbd_device": "/dev/nbd1", 00:08:04.817 "bdev_name": "Malloc1" 00:08:04.817 } 00:08:04.817 ]' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:04.817 /dev/nbd1' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:04.817 /dev/nbd1' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@65 -- # count=2 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@95 -- # count=2 00:08:04.817 10:04:37 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:04.817 256+0 records in 00:08:04.817 256+0 records out 00:08:04.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.009896 s, 106 MB/s 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:04.817 256+0 records in 00:08:04.817 256+0 records out 00:08:04.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193538 s, 54.2 MB/s 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.817 10:04:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.817 256+0 records in 00:08:04.817 256+0 records out 00:08:04.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207138 s, 50.6 MB/s 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@51 -- # local i 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.817 10:04:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.127 10:04:38 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@41 -- # break 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.127 10:04:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@41 -- # break 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.395 10:04:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@65 -- # true 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@65 -- # count=0 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@104 -- # count=0 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:05.654 10:04:38 -- bdev/nbd_common.sh@109 -- # return 0 00:08:05.654 10:04:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:05.913 10:04:39 -- event/event.sh@35 -- # sleep 3 00:08:06.173 [2024-04-17 10:04:39.352550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:06.173 [2024-04-17 10:04:39.430879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.173 [2024-04-17 10:04:39.430883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.173 [2024-04-17 10:04:39.475789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:06.173 [2024-04-17 10:04:39.475836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
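Editor's note: the pass traced above is the nbd_dd_data_verify write/verify sequence. A minimal stand-alone sketch of the same pattern follows; the scratch path, block size, and device list are illustrative, not the exact values the test derives at runtime.

    tmp_file=/tmp/nbdrandtest                # assumption: any scratch file works
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write pass, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # verify pass: byte-for-byte compare of the first 1 MiB
    done
    rm "$tmp_file"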
00:08:09.460 10:04:42 -- event/event.sh@38 -- # waitforlisten 3274084 /var/tmp/spdk-nbd.sock 00:08:09.460 10:04:42 -- common/autotest_common.sh@819 -- # '[' -z 3274084 ']' 00:08:09.460 10:04:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.460 10:04:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:09.461 10:04:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:09.461 10:04:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:09.461 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.461 10:04:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:09.461 10:04:42 -- common/autotest_common.sh@852 -- # return 0 00:08:09.461 10:04:42 -- event/event.sh@39 -- # killprocess 3274084 00:08:09.461 10:04:42 -- common/autotest_common.sh@926 -- # '[' -z 3274084 ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@930 -- # kill -0 3274084 00:08:09.461 10:04:42 -- common/autotest_common.sh@931 -- # uname 00:08:09.461 10:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3274084 00:08:09.461 10:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:09.461 10:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3274084' 00:08:09.461 killing process with pid 3274084 00:08:09.461 10:04:42 -- common/autotest_common.sh@945 -- # kill 3274084 00:08:09.461 10:04:42 -- common/autotest_common.sh@950 -- # wait 3274084 00:08:09.461 spdk_app_start is called in Round 0. 00:08:09.461 Shutdown signal received, stop current app iteration 00:08:09.461 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:08:09.461 spdk_app_start is called in Round 1. 00:08:09.461 Shutdown signal received, stop current app iteration 00:08:09.461 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:08:09.461 spdk_app_start is called in Round 2. 00:08:09.461 Shutdown signal received, stop current app iteration 00:08:09.461 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:08:09.461 spdk_app_start is called in Round 3. 
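Editor's note: the killprocess/wait pair traced above (check the pid, confirm it is an SPDK reactor via ps, then kill and reap it) comes from autotest_common.sh. The helper below is only an approximation of the behaviour visible in the trace, and the _sketch name is mine.

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                   # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for an SPDK target
        [ "$name" = sudo ] && return 1               # the real helper treats sudo-wrapped targets specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it; works because the target is a child of the test shell
    }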
00:08:09.461 Shutdown signal received, stop current app iteration 00:08:09.461 10:04:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:09.461 10:04:42 -- event/event.sh@42 -- # return 0 00:08:09.461 00:08:09.461 real 0m17.339s 00:08:09.461 user 0m37.720s 00:08:09.461 sys 0m2.771s 00:08:09.461 10:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.461 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.461 ************************************ 00:08:09.461 END TEST app_repeat 00:08:09.461 ************************************ 00:08:09.461 10:04:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:09.461 10:04:42 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:09.461 10:04:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.461 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.461 ************************************ 00:08:09.461 START TEST cpu_locks 00:08:09.461 ************************************ 00:08:09.461 10:04:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:09.461 * Looking for test storage... 00:08:09.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:09.461 10:04:42 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:09.461 10:04:42 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:09.461 10:04:42 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:09.461 10:04:42 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:09.461 10:04:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.461 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.461 ************************************ 00:08:09.461 START TEST default_locks 00:08:09.461 ************************************ 00:08:09.461 10:04:42 -- common/autotest_common.sh@1104 -- # default_locks 00:08:09.461 10:04:42 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3277511 00:08:09.461 10:04:42 -- event/cpu_locks.sh@47 -- # waitforlisten 3277511 00:08:09.461 10:04:42 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.461 10:04:42 -- common/autotest_common.sh@819 -- # '[' -z 3277511 ']' 00:08:09.461 10:04:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.461 10:04:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:09.461 10:04:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.461 10:04:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:09.461 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.720 [2024-04-17 10:04:42.806910] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
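Editor's note: each cpu_locks sub-test starts here by launching its own spdk_tgt and waiting for the RPC socket. A hedged sketch, assuming the working directory is the SPDK source tree; the real test uses the waitforlisten helper rather than the polling loop shown.

    ./build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    # poll until the app answers on its default RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done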
00:08:09.720 [2024-04-17 10:04:42.806974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277511 ] 00:08:09.720 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.720 [2024-04-17 10:04:42.887377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.720 [2024-04-17 10:04:42.972761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.720 [2024-04-17 10:04:42.972918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.657 10:04:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:10.657 10:04:43 -- common/autotest_common.sh@852 -- # return 0 00:08:10.657 10:04:43 -- event/cpu_locks.sh@49 -- # locks_exist 3277511 00:08:10.657 10:04:43 -- event/cpu_locks.sh@22 -- # lslocks -p 3277511 00:08:10.657 10:04:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:10.657 lslocks: write error 00:08:10.657 10:04:43 -- event/cpu_locks.sh@50 -- # killprocess 3277511 00:08:10.657 10:04:43 -- common/autotest_common.sh@926 -- # '[' -z 3277511 ']' 00:08:10.657 10:04:43 -- common/autotest_common.sh@930 -- # kill -0 3277511 00:08:10.657 10:04:43 -- common/autotest_common.sh@931 -- # uname 00:08:10.657 10:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:10.657 10:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3277511 00:08:10.657 10:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:10.657 10:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:10.657 10:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3277511' 00:08:10.657 killing process with pid 3277511 00:08:10.657 10:04:43 -- common/autotest_common.sh@945 -- # kill 3277511 00:08:10.657 10:04:43 -- common/autotest_common.sh@950 -- # wait 3277511 00:08:11.226 10:04:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3277511 00:08:11.226 10:04:44 -- common/autotest_common.sh@640 -- # local es=0 00:08:11.226 10:04:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3277511 00:08:11.226 10:04:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:11.226 10:04:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:11.226 10:04:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:11.226 10:04:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:11.226 10:04:44 -- common/autotest_common.sh@643 -- # waitforlisten 3277511 00:08:11.226 10:04:44 -- common/autotest_common.sh@819 -- # '[' -z 3277511 ']' 00:08:11.226 10:04:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.226 10:04:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.226 10:04:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
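Editor's note: the "lslocks: write error" above is benign; grep -q exits on the first match and lslocks then writes to a closed pipe. The lock check itself reduces to the few lines below (helper name taken from the trace, body simplified):

    locks_exist() {
        # does pid $1 hold a file lock on one of the /var/tmp/spdk_cpu_lock_* files?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }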
00:08:11.226 10:04:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3277511) - No such process 00:08:11.226 ERROR: process (pid: 3277511) is no longer running 00:08:11.226 10:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.226 10:04:44 -- common/autotest_common.sh@852 -- # return 1 00:08:11.226 10:04:44 -- common/autotest_common.sh@643 -- # es=1 00:08:11.226 10:04:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:11.226 10:04:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:11.226 10:04:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:11.226 10:04:44 -- event/cpu_locks.sh@54 -- # no_locks 00:08:11.226 10:04:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:11.226 10:04:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:11.226 10:04:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:11.226 00:08:11.226 real 0m1.574s 00:08:11.226 user 0m1.743s 00:08:11.226 sys 0m0.487s 00:08:11.226 10:04:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.226 ************************************ 00:08:11.226 END TEST default_locks 00:08:11.226 ************************************ 00:08:11.226 10:04:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:11.226 10:04:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.226 10:04:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.226 ************************************ 00:08:11.226 START TEST default_locks_via_rpc 00:08:11.226 ************************************ 00:08:11.226 10:04:44 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:08:11.226 10:04:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3277823 00:08:11.226 10:04:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:11.226 10:04:44 -- event/cpu_locks.sh@63 -- # waitforlisten 3277823 00:08:11.226 10:04:44 -- common/autotest_common.sh@819 -- # '[' -z 3277823 ']' 00:08:11.226 10:04:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.226 10:04:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.226 10:04:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.226 10:04:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.226 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.226 [2024-04-17 10:04:44.414234] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
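Editor's note: the via-rpc variant starting here toggles core-lock claiming at runtime instead of at start-up. The two RPC method names are taken from the trace; rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the current socket.

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core lock files
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them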
00:08:11.226 [2024-04-17 10:04:44.414294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277823 ] 00:08:11.226 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.226 [2024-04-17 10:04:44.494899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.485 [2024-04-17 10:04:44.584911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.485 [2024-04-17 10:04:44.585062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.053 10:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:12.053 10:04:45 -- common/autotest_common.sh@852 -- # return 0 00:08:12.053 10:04:45 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:12.053 10:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.053 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:12.053 10:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.053 10:04:45 -- event/cpu_locks.sh@67 -- # no_locks 00:08:12.053 10:04:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:12.053 10:04:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:12.053 10:04:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:12.053 10:04:45 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:12.053 10:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.053 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:08:12.053 10:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.053 10:04:45 -- event/cpu_locks.sh@71 -- # locks_exist 3277823 00:08:12.053 10:04:45 -- event/cpu_locks.sh@22 -- # lslocks -p 3277823 00:08:12.053 10:04:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:12.621 10:04:45 -- event/cpu_locks.sh@73 -- # killprocess 3277823 00:08:12.621 10:04:45 -- common/autotest_common.sh@926 -- # '[' -z 3277823 ']' 00:08:12.621 10:04:45 -- common/autotest_common.sh@930 -- # kill -0 3277823 00:08:12.621 10:04:45 -- common/autotest_common.sh@931 -- # uname 00:08:12.621 10:04:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:12.621 10:04:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3277823 00:08:12.621 10:04:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:12.621 10:04:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:12.621 10:04:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3277823' 00:08:12.621 killing process with pid 3277823 00:08:12.621 10:04:45 -- common/autotest_common.sh@945 -- # kill 3277823 00:08:12.621 10:04:45 -- common/autotest_common.sh@950 -- # wait 3277823 00:08:12.880 00:08:12.880 real 0m1.759s 00:08:12.880 user 0m1.842s 00:08:12.880 sys 0m0.586s 00:08:12.880 10:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.880 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:12.880 ************************************ 00:08:12.880 END TEST default_locks_via_rpc 00:08:12.880 ************************************ 00:08:12.880 10:04:46 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:12.880 10:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.880 10:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.880 10:04:46 -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.880 ************************************ 00:08:12.880 START TEST non_locking_app_on_locked_coremask 00:08:12.880 ************************************ 00:08:12.880 10:04:46 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:08:12.880 10:04:46 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3278341 00:08:12.880 10:04:46 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:12.880 10:04:46 -- event/cpu_locks.sh@81 -- # waitforlisten 3278341 /var/tmp/spdk.sock 00:08:12.880 10:04:46 -- common/autotest_common.sh@819 -- # '[' -z 3278341 ']' 00:08:12.880 10:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.880 10:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:12.880 10:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.880 10:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:12.880 10:04:46 -- common/autotest_common.sh@10 -- # set +x 00:08:13.140 [2024-04-17 10:04:46.214258] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:13.140 [2024-04-17 10:04:46.214318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278341 ] 00:08:13.140 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.140 [2024-04-17 10:04:46.294087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.140 [2024-04-17 10:04:46.380386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.140 [2024-04-17 10:04:46.380540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.077 10:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.077 10:04:47 -- common/autotest_common.sh@852 -- # return 0 00:08:14.077 10:04:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3278371 00:08:14.077 10:04:47 -- event/cpu_locks.sh@85 -- # waitforlisten 3278371 /var/tmp/spdk2.sock 00:08:14.077 10:04:47 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:14.077 10:04:47 -- common/autotest_common.sh@819 -- # '[' -z 3278371 ']' 00:08:14.077 10:04:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:14.077 10:04:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.077 10:04:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:14.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:14.077 10:04:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.077 10:04:47 -- common/autotest_common.sh@10 -- # set +x 00:08:14.077 [2024-04-17 10:04:47.104423] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
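Editor's note: the scenario exercised here is two targets on the same core, where the second opts out of lock claiming. A sketch of the two launch commands as they appear in the trace (binary and socket paths shortened to the SPDK tree):

    ./build/bin/spdk_tgt -m 0x1 &                                                  # first instance claims the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips claiming, so it starts anyway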
00:08:14.077 [2024-04-17 10:04:47.104483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278371 ] 00:08:14.077 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.077 [2024-04-17 10:04:47.209457] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:14.077 [2024-04-17 10:04:47.209487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.077 [2024-04-17 10:04:47.382613] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.077 [2024-04-17 10:04:47.382771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.013 10:04:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:15.013 10:04:48 -- common/autotest_common.sh@852 -- # return 0 00:08:15.013 10:04:48 -- event/cpu_locks.sh@87 -- # locks_exist 3278341 00:08:15.013 10:04:48 -- event/cpu_locks.sh@22 -- # lslocks -p 3278341 00:08:15.013 10:04:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.581 lslocks: write error 00:08:15.582 10:04:48 -- event/cpu_locks.sh@89 -- # killprocess 3278341 00:08:15.582 10:04:48 -- common/autotest_common.sh@926 -- # '[' -z 3278341 ']' 00:08:15.582 10:04:48 -- common/autotest_common.sh@930 -- # kill -0 3278341 00:08:15.582 10:04:48 -- common/autotest_common.sh@931 -- # uname 00:08:15.582 10:04:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.582 10:04:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278341 00:08:15.582 10:04:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:15.582 10:04:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:15.582 10:04:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278341' 00:08:15.582 killing process with pid 3278341 00:08:15.582 10:04:48 -- common/autotest_common.sh@945 -- # kill 3278341 00:08:15.582 10:04:48 -- common/autotest_common.sh@950 -- # wait 3278341 00:08:16.519 10:04:49 -- event/cpu_locks.sh@90 -- # killprocess 3278371 00:08:16.519 10:04:49 -- common/autotest_common.sh@926 -- # '[' -z 3278371 ']' 00:08:16.519 10:04:49 -- common/autotest_common.sh@930 -- # kill -0 3278371 00:08:16.519 10:04:49 -- common/autotest_common.sh@931 -- # uname 00:08:16.519 10:04:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:16.519 10:04:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278371 00:08:16.519 10:04:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:16.519 10:04:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:16.519 10:04:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278371' 00:08:16.519 killing process with pid 3278371 00:08:16.519 10:04:49 -- common/autotest_common.sh@945 -- # kill 3278371 00:08:16.519 10:04:49 -- common/autotest_common.sh@950 -- # wait 3278371 00:08:16.778 00:08:16.778 real 0m3.771s 00:08:16.778 user 0m4.139s 00:08:16.778 sys 0m1.063s 00:08:16.778 10:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.778 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.778 ************************************ 00:08:16.778 END TEST non_locking_app_on_locked_coremask 00:08:16.778 ************************************ 00:08:16.778 10:04:49 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:08:16.778 10:04:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:16.778 10:04:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.778 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.778 ************************************ 00:08:16.778 START TEST locking_app_on_unlocked_coremask 00:08:16.778 ************************************ 00:08:16.778 10:04:49 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:08:16.778 10:04:49 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3278935 00:08:16.778 10:04:49 -- event/cpu_locks.sh@99 -- # waitforlisten 3278935 /var/tmp/spdk.sock 00:08:16.778 10:04:49 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:16.778 10:04:49 -- common/autotest_common.sh@819 -- # '[' -z 3278935 ']' 00:08:16.778 10:04:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.778 10:04:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.778 10:04:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.778 10:04:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.778 10:04:49 -- common/autotest_common.sh@10 -- # set +x 00:08:16.778 [2024-04-17 10:04:50.035358] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:16.778 [2024-04-17 10:04:50.035421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278935 ] 00:08:16.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.037 [2024-04-17 10:04:50.115829] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:17.037 [2024-04-17 10:04:50.115863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.037 [2024-04-17 10:04:50.205161] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.037 [2024-04-17 10:04:50.205313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.972 10:04:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:17.972 10:04:50 -- common/autotest_common.sh@852 -- # return 0 00:08:17.972 10:04:50 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3279201 00:08:17.972 10:04:50 -- event/cpu_locks.sh@103 -- # waitforlisten 3279201 /var/tmp/spdk2.sock 00:08:17.973 10:04:50 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:17.973 10:04:50 -- common/autotest_common.sh@819 -- # '[' -z 3279201 ']' 00:08:17.973 10:04:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.973 10:04:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.973 10:04:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:17.973 10:04:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.973 10:04:50 -- common/autotest_common.sh@10 -- # set +x 00:08:17.973 [2024-04-17 10:04:51.009292] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:17.973 [2024-04-17 10:04:51.009353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279201 ] 00:08:17.973 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.973 [2024-04-17 10:04:51.120035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.973 [2024-04-17 10:04:51.289704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.973 [2024-04-17 10:04:51.289866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.910 10:04:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.910 10:04:51 -- common/autotest_common.sh@852 -- # return 0 00:08:18.910 10:04:51 -- event/cpu_locks.sh@105 -- # locks_exist 3279201 00:08:18.910 10:04:51 -- event/cpu_locks.sh@22 -- # lslocks -p 3279201 00:08:18.910 10:04:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.845 lslocks: write error 00:08:19.845 10:04:52 -- event/cpu_locks.sh@107 -- # killprocess 3278935 00:08:19.845 10:04:52 -- common/autotest_common.sh@926 -- # '[' -z 3278935 ']' 00:08:19.845 10:04:52 -- common/autotest_common.sh@930 -- # kill -0 3278935 00:08:19.845 10:04:52 -- common/autotest_common.sh@931 -- # uname 00:08:19.845 10:04:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.845 10:04:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278935 00:08:19.845 10:04:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.845 10:04:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.845 10:04:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278935' 00:08:19.845 killing process with pid 3278935 00:08:19.845 10:04:52 -- common/autotest_common.sh@945 -- # kill 3278935 00:08:19.845 10:04:52 -- common/autotest_common.sh@950 -- # wait 3278935 00:08:20.413 10:04:53 -- event/cpu_locks.sh@108 -- # killprocess 3279201 00:08:20.413 10:04:53 -- common/autotest_common.sh@926 -- # '[' -z 3279201 ']' 00:08:20.413 10:04:53 -- common/autotest_common.sh@930 -- # kill -0 3279201 00:08:20.413 10:04:53 -- common/autotest_common.sh@931 -- # uname 00:08:20.413 10:04:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.413 10:04:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3279201 00:08:20.413 10:04:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:20.413 10:04:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:20.413 10:04:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3279201' 00:08:20.413 killing process with pid 3279201 00:08:20.413 10:04:53 -- common/autotest_common.sh@945 -- # kill 3279201 00:08:20.413 10:04:53 -- common/autotest_common.sh@950 -- # wait 3279201 00:08:20.981 00:08:20.981 real 0m4.052s 00:08:20.981 user 0m4.483s 00:08:20.981 sys 0m1.157s 00:08:20.981 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.981 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.981 ************************************ 00:08:20.981 END TEST locking_app_on_unlocked_coremask 
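Editor's note: this test is the mirror image of the previous one; a sketch of the ordering, with paths shortened as before.

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # first instance does not claim core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second instance is free to claim core 0 itself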
00:08:20.981 ************************************ 00:08:20.981 10:04:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:20.981 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.981 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.981 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.981 ************************************ 00:08:20.981 START TEST locking_app_on_locked_coremask 00:08:20.981 ************************************ 00:08:20.981 10:04:54 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:08:20.982 10:04:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3279768 00:08:20.982 10:04:54 -- event/cpu_locks.sh@116 -- # waitforlisten 3279768 /var/tmp/spdk.sock 00:08:20.982 10:04:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.982 10:04:54 -- common/autotest_common.sh@819 -- # '[' -z 3279768 ']' 00:08:20.982 10:04:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.982 10:04:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:20.982 10:04:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.982 10:04:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:20.982 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:08:20.982 [2024-04-17 10:04:54.128245] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:20.982 [2024-04-17 10:04:54.128312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279768 ] 00:08:20.982 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.982 [2024-04-17 10:04:54.208950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.982 [2024-04-17 10:04:54.292169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.982 [2024-04-17 10:04:54.292326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.918 10:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:21.918 10:04:55 -- common/autotest_common.sh@852 -- # return 0 00:08:21.918 10:04:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3280034 00:08:21.918 10:04:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3280034 /var/tmp/spdk2.sock 00:08:21.918 10:04:55 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:21.918 10:04:55 -- common/autotest_common.sh@640 -- # local es=0 00:08:21.918 10:04:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3280034 /var/tmp/spdk2.sock 00:08:21.919 10:04:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:21.919 10:04:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.919 10:04:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:21.919 10:04:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.919 10:04:55 -- common/autotest_common.sh@643 -- # waitforlisten 3280034 /var/tmp/spdk2.sock 00:08:21.919 10:04:55 -- common/autotest_common.sh@819 -- 
# '[' -z 3280034 ']' 00:08:21.919 10:04:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.919 10:04:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.919 10:04:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.919 10:04:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.919 10:04:55 -- common/autotest_common.sh@10 -- # set +x 00:08:21.919 [2024-04-17 10:04:55.105873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:21.919 [2024-04-17 10:04:55.105932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280034 ] 00:08:21.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.919 [2024-04-17 10:04:55.213974] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3279768 has claimed it. 00:08:21.919 [2024-04-17 10:04:55.214022] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:22.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3280034) - No such process 00:08:22.487 ERROR: process (pid: 3280034) is no longer running 00:08:22.487 10:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:22.487 10:04:55 -- common/autotest_common.sh@852 -- # return 1 00:08:22.487 10:04:55 -- common/autotest_common.sh@643 -- # es=1 00:08:22.487 10:04:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:22.487 10:04:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:22.487 10:04:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:22.487 10:04:55 -- event/cpu_locks.sh@122 -- # locks_exist 3279768 00:08:22.487 10:04:55 -- event/cpu_locks.sh@22 -- # lslocks -p 3279768 00:08:22.487 10:04:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:22.745 lslocks: write error 00:08:22.745 10:04:56 -- event/cpu_locks.sh@124 -- # killprocess 3279768 00:08:22.745 10:04:56 -- common/autotest_common.sh@926 -- # '[' -z 3279768 ']' 00:08:22.745 10:04:56 -- common/autotest_common.sh@930 -- # kill -0 3279768 00:08:22.745 10:04:56 -- common/autotest_common.sh@931 -- # uname 00:08:22.745 10:04:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:22.745 10:04:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3279768 00:08:23.004 10:04:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.004 10:04:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.004 10:04:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3279768' 00:08:23.004 killing process with pid 3279768 00:08:23.004 10:04:56 -- common/autotest_common.sh@945 -- # kill 3279768 00:08:23.004 10:04:56 -- common/autotest_common.sh@950 -- # wait 3279768 00:08:23.262 00:08:23.262 real 0m2.374s 00:08:23.262 user 0m2.740s 00:08:23.262 sys 0m0.636s 00:08:23.262 10:04:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.262 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.262 ************************************ 00:08:23.262 END TEST locking_app_on_locked_coremask 00:08:23.262 ************************************ 00:08:23.262 
10:04:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:23.262 10:04:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:23.262 10:04:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.262 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.262 ************************************ 00:08:23.263 START TEST locking_overlapped_coremask 00:08:23.263 ************************************ 00:08:23.263 10:04:56 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:08:23.263 10:04:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3280326 00:08:23.263 10:04:56 -- event/cpu_locks.sh@133 -- # waitforlisten 3280326 /var/tmp/spdk.sock 00:08:23.263 10:04:56 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:23.263 10:04:56 -- common/autotest_common.sh@819 -- # '[' -z 3280326 ']' 00:08:23.263 10:04:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.263 10:04:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:23.263 10:04:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.263 10:04:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:23.263 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:08:23.263 [2024-04-17 10:04:56.541597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:23.263 [2024-04-17 10:04:56.541670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280326 ] 00:08:23.263 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.521 [2024-04-17 10:04:56.622614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.521 [2024-04-17 10:04:56.703522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:23.521 [2024-04-17 10:04:56.703715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.521 [2024-04-17 10:04:56.703839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.521 [2024-04-17 10:04:56.703840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.089 10:04:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.089 10:04:57 -- common/autotest_common.sh@852 -- # return 0 00:08:24.089 10:04:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3280412 00:08:24.089 10:04:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3280412 /var/tmp/spdk2.sock 00:08:24.089 10:04:57 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:24.089 10:04:57 -- common/autotest_common.sh@640 -- # local es=0 00:08:24.089 10:04:57 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3280412 /var/tmp/spdk2.sock 00:08:24.089 10:04:57 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:24.089 10:04:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.089 10:04:57 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:24.089 10:04:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:24.089 10:04:57 
-- common/autotest_common.sh@643 -- # waitforlisten 3280412 /var/tmp/spdk2.sock 00:08:24.089 10:04:57 -- common/autotest_common.sh@819 -- # '[' -z 3280412 ']' 00:08:24.089 10:04:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.089 10:04:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:24.089 10:04:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.089 10:04:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:24.089 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:08:24.347 [2024-04-17 10:04:57.441531] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:24.347 [2024-04-17 10:04:57.441592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280412 ] 00:08:24.347 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.347 [2024-04-17 10:04:57.525034] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3280326 has claimed it. 00:08:24.347 [2024-04-17 10:04:57.525069] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:24.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3280412) - No such process 00:08:24.914 ERROR: process (pid: 3280412) is no longer running 00:08:24.914 10:04:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.914 10:04:58 -- common/autotest_common.sh@852 -- # return 1 00:08:24.914 10:04:58 -- common/autotest_common.sh@643 -- # es=1 00:08:24.914 10:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:24.914 10:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:24.914 10:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:24.914 10:04:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:24.914 10:04:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:24.914 10:04:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:24.914 10:04:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:24.914 10:04:58 -- event/cpu_locks.sh@141 -- # killprocess 3280326 00:08:24.914 10:04:58 -- common/autotest_common.sh@926 -- # '[' -z 3280326 ']' 00:08:24.914 10:04:58 -- common/autotest_common.sh@930 -- # kill -0 3280326 00:08:24.914 10:04:58 -- common/autotest_common.sh@931 -- # uname 00:08:24.914 10:04:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:24.914 10:04:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3280326 00:08:24.914 10:04:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:24.914 10:04:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:24.914 10:04:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3280326' 00:08:24.914 killing process with pid 3280326 00:08:24.914 10:04:58 -- common/autotest_common.sh@945 -- # kill 3280326 00:08:24.914 10:04:58 
-- common/autotest_common.sh@950 -- # wait 3280326 00:08:25.482 00:08:25.482 real 0m2.055s 00:08:25.482 user 0m5.788s 00:08:25.482 sys 0m0.448s 00:08:25.482 10:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.482 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.482 ************************************ 00:08:25.482 END TEST locking_overlapped_coremask 00:08:25.482 ************************************ 00:08:25.482 10:04:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:25.482 10:04:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.482 10:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.482 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.482 ************************************ 00:08:25.482 START TEST locking_overlapped_coremask_via_rpc 00:08:25.482 ************************************ 00:08:25.482 10:04:58 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:08:25.482 10:04:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3280636 00:08:25.482 10:04:58 -- event/cpu_locks.sh@149 -- # waitforlisten 3280636 /var/tmp/spdk.sock 00:08:25.482 10:04:58 -- common/autotest_common.sh@819 -- # '[' -z 3280636 ']' 00:08:25.482 10:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.482 10:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.482 10:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.483 10:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.483 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.483 10:04:58 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:25.483 [2024-04-17 10:04:58.631170] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:25.483 [2024-04-17 10:04:58.631231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280636 ] 00:08:25.483 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.483 [2024-04-17 10:04:58.710128] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
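Editor's note: the claim failure traced above (and the RPC failure further below) follows from the overlap of the two core masks used by both overlapped-coremask tests; the arithmetic is just a bitwise AND.

    # 0x7  -> cores 0,1,2   (first target, holds the lock files)
    # 0x1c -> cores 2,3,4   (second target)
    printf 'contended mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2 is claimed by both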
00:08:25.483 [2024-04-17 10:04:58.710160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.483 [2024-04-17 10:04:58.798934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.483 [2024-04-17 10:04:58.799111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.483 [2024-04-17 10:04:58.799225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.483 [2024-04-17 10:04:58.799226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.420 10:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.420 10:04:59 -- common/autotest_common.sh@852 -- # return 0 00:08:26.420 10:04:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3280902 00:08:26.420 10:04:59 -- event/cpu_locks.sh@153 -- # waitforlisten 3280902 /var/tmp/spdk2.sock 00:08:26.420 10:04:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:26.420 10:04:59 -- common/autotest_common.sh@819 -- # '[' -z 3280902 ']' 00:08:26.420 10:04:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:26.420 10:04:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:26.420 10:04:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:26.420 10:04:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:26.420 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:08:26.420 [2024-04-17 10:04:59.610525] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:26.420 [2024-04-17 10:04:59.610571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280902 ] 00:08:26.420 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.420 [2024-04-17 10:04:59.677192] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:26.420 [2024-04-17 10:04:59.677215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.679 [2024-04-17 10:04:59.813433] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.679 [2024-04-17 10:04:59.813582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.679 [2024-04-17 10:04:59.816683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.679 [2024-04-17 10:04:59.816684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.246 10:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.246 10:05:00 -- common/autotest_common.sh@852 -- # return 0 00:08:27.246 10:05:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:27.246 10:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.246 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.246 10:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.246 10:05:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.246 10:05:00 -- common/autotest_common.sh@640 -- # local es=0 00:08:27.246 10:05:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.246 10:05:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:08:27.246 10:05:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:27.246 10:05:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:08:27.246 10:05:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:27.246 10:05:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.246 10:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.246 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.246 [2024-04-17 10:05:00.568711] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3280636 has claimed it. 00:08:27.246 request: 00:08:27.246 { 00:08:27.246 "method": "framework_enable_cpumask_locks", 00:08:27.246 "req_id": 1 00:08:27.246 } 00:08:27.246 Got JSON-RPC error response 00:08:27.246 response: 00:08:27.246 { 00:08:27.246 "code": -32603, 00:08:27.246 "message": "Failed to claim CPU core: 2" 00:08:27.246 } 00:08:27.246 10:05:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:08:27.246 10:05:00 -- common/autotest_common.sh@643 -- # es=1 00:08:27.246 10:05:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:27.246 10:05:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:27.246 10:05:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:27.246 10:05:00 -- event/cpu_locks.sh@158 -- # waitforlisten 3280636 /var/tmp/spdk.sock 00:08:27.246 10:05:00 -- common/autotest_common.sh@819 -- # '[' -z 3280636 ']' 00:08:27.506 10:05:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.506 10:05:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:27.506 10:05:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
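Editor's note: the JSON-RPC exchange above can be reproduced by hand against the second instance's socket; the method name and error text are taken from the trace, and -32603 is the generic JSON-RPC "internal error" code.

    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603, "Failed to claim CPU core: 2", because the first
    #    instance already holds the /var/tmp/spdk_cpu_lock_002 file lock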
00:08:27.506 10:05:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:27.506 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.506 10:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.506 10:05:00 -- common/autotest_common.sh@852 -- # return 0 00:08:27.506 10:05:00 -- event/cpu_locks.sh@159 -- # waitforlisten 3280902 /var/tmp/spdk2.sock 00:08:27.506 10:05:00 -- common/autotest_common.sh@819 -- # '[' -z 3280902 ']' 00:08:27.506 10:05:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.506 10:05:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:27.506 10:05:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.506 10:05:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:27.506 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:08:27.765 10:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.765 10:05:01 -- common/autotest_common.sh@852 -- # return 0 00:08:27.765 10:05:01 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:27.765 10:05:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.765 10:05:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.765 10:05:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.765 00:08:27.765 real 0m2.495s 00:08:27.765 user 0m1.243s 00:08:27.765 sys 0m0.181s 00:08:27.765 10:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.765 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.765 ************************************ 00:08:27.765 END TEST locking_overlapped_coremask_via_rpc 00:08:27.765 ************************************ 00:08:28.028 10:05:01 -- event/cpu_locks.sh@174 -- # cleanup 00:08:28.028 10:05:01 -- event/cpu_locks.sh@15 -- # [[ -z 3280636 ]] 00:08:28.028 10:05:01 -- event/cpu_locks.sh@15 -- # killprocess 3280636 00:08:28.028 10:05:01 -- common/autotest_common.sh@926 -- # '[' -z 3280636 ']' 00:08:28.028 10:05:01 -- common/autotest_common.sh@930 -- # kill -0 3280636 00:08:28.028 10:05:01 -- common/autotest_common.sh@931 -- # uname 00:08:28.028 10:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.028 10:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3280636 00:08:28.028 10:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.028 10:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.028 10:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3280636' 00:08:28.028 killing process with pid 3280636 00:08:28.028 10:05:01 -- common/autotest_common.sh@945 -- # kill 3280636 00:08:28.028 10:05:01 -- common/autotest_common.sh@950 -- # wait 3280636 00:08:28.287 10:05:01 -- event/cpu_locks.sh@16 -- # [[ -z 3280902 ]] 00:08:28.287 10:05:01 -- event/cpu_locks.sh@16 -- # killprocess 3280902 00:08:28.287 10:05:01 -- common/autotest_common.sh@926 -- # '[' -z 3280902 ']' 00:08:28.287 10:05:01 -- common/autotest_common.sh@930 -- # kill -0 3280902 00:08:28.287 10:05:01 -- common/autotest_common.sh@931 -- # uname 
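Editor's note: check_remaining_locks, traced above, simply compares the lock files actually present with the ones a 0x7 mask should have created; a compact restatement of the same check:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]    # exactly the locks for cores 0-2, nothing stale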
00:08:28.287 10:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.287 10:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3280902 00:08:28.287 10:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:28.287 10:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:28.287 10:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3280902' 00:08:28.287 killing process with pid 3280902 00:08:28.287 10:05:01 -- common/autotest_common.sh@945 -- # kill 3280902 00:08:28.287 10:05:01 -- common/autotest_common.sh@950 -- # wait 3280902 00:08:28.855 10:05:01 -- event/cpu_locks.sh@18 -- # rm -f 00:08:28.855 10:05:01 -- event/cpu_locks.sh@1 -- # cleanup 00:08:28.855 10:05:01 -- event/cpu_locks.sh@15 -- # [[ -z 3280636 ]] 00:08:28.855 10:05:01 -- event/cpu_locks.sh@15 -- # killprocess 3280636 00:08:28.855 10:05:01 -- common/autotest_common.sh@926 -- # '[' -z 3280636 ']' 00:08:28.855 10:05:01 -- common/autotest_common.sh@930 -- # kill -0 3280636 00:08:28.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3280636) - No such process 00:08:28.855 10:05:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3280636 is not found' 00:08:28.856 Process with pid 3280636 is not found 00:08:28.856 10:05:01 -- event/cpu_locks.sh@16 -- # [[ -z 3280902 ]] 00:08:28.856 10:05:01 -- event/cpu_locks.sh@16 -- # killprocess 3280902 00:08:28.856 10:05:01 -- common/autotest_common.sh@926 -- # '[' -z 3280902 ']' 00:08:28.856 10:05:01 -- common/autotest_common.sh@930 -- # kill -0 3280902 00:08:28.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3280902) - No such process 00:08:28.856 10:05:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3280902 is not found' 00:08:28.856 Process with pid 3280902 is not found 00:08:28.856 10:05:01 -- event/cpu_locks.sh@18 -- # rm -f 00:08:28.856 00:08:28.856 real 0m19.254s 00:08:28.856 user 0m34.360s 00:08:28.856 sys 0m5.367s 00:08:28.856 10:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.856 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.856 ************************************ 00:08:28.856 END TEST cpu_locks 00:08:28.856 ************************************ 00:08:28.856 00:08:28.856 real 0m45.230s 00:08:28.856 user 1m26.771s 00:08:28.856 sys 0m8.964s 00:08:28.856 10:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.856 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.856 ************************************ 00:08:28.856 END TEST event 00:08:28.856 ************************************ 00:08:28.856 10:05:01 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:28.856 10:05:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.856 10:05:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.856 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:28.856 ************************************ 00:08:28.856 START TEST thread 00:08:28.856 ************************************ 00:08:28.856 10:05:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:28.856 * Looking for test storage... 
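The killprocess/cleanup sequence above probes each target with kill -0 before signalling it, which is why the second pass hits "No such process" at autotest_common.sh line 930 and falls through to the "is not found" echo. The idiom in isolation, with an illustrative pid taken from the run above:

    pid=3280636                      # example pid from this run
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && wait "$pid"   # terminate and reap (the test shell spawned the target)
    else
        echo "Process with pid $pid is not found"
    fi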
00:08:28.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:28.856 10:05:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:28.856 10:05:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:28.856 10:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.856 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:08:28.856 ************************************ 00:08:28.856 START TEST thread_poller_perf 00:08:28.856 ************************************ 00:08:28.856 10:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:28.856 [2024-04-17 10:05:02.102142] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:28.856 [2024-04-17 10:05:02.102212] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281526 ] 00:08:28.856 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.856 [2024-04-17 10:05:02.183413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.115 [2024-04-17 10:05:02.268847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.115 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:30.052 ====================================== 00:08:30.052 busy:2216092096 (cyc) 00:08:30.052 total_run_count: 249000 00:08:30.052 tsc_hz: 2200000000 (cyc) 00:08:30.052 ====================================== 00:08:30.052 poller_cost: 8899 (cyc), 4045 (nsec) 00:08:30.052 00:08:30.052 real 0m1.295s 00:08:30.052 user 0m1.197s 00:08:30.052 sys 0m0.092s 00:08:30.052 10:05:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.052 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.052 ************************************ 00:08:30.052 END TEST thread_poller_perf 00:08:30.052 ************************************ 00:08:30.313 10:05:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:30.313 10:05:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:30.313 10:05:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.313 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:08:30.313 ************************************ 00:08:30.313 START TEST thread_poller_perf 00:08:30.313 ************************************ 00:08:30.313 10:05:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:30.313 [2024-04-17 10:05:03.432998] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
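The poller_cost in the summary above is, to a first approximation, the busy TSC cycle count divided by total_run_count, converted to nanoseconds via tsc_hz; the tool's exact figure may subtract a small fixed overhead. A quick check of the first run's logged numbers:

    awk 'BEGIN {
        busy = 2216092096; runs = 249000; hz = 2200000000
        cyc = busy / runs                                    # ~8900 cyc (log reports 8899)
        printf "%.0f cyc  %.0f nsec\n", cyc, cyc * 1e9 / hz  # ~4046 nsec (log reports 4045)
    }'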
00:08:30.313 [2024-04-17 10:05:03.433073] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281788 ] 00:08:30.313 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.313 [2024-04-17 10:05:03.511797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.313 [2024-04-17 10:05:03.595094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.313 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:31.692 ====================================== 00:08:31.692 busy:2202760078 (cyc) 00:08:31.692 total_run_count: 3293000 00:08:31.692 tsc_hz: 2200000000 (cyc) 00:08:31.692 ====================================== 00:08:31.692 poller_cost: 668 (cyc), 303 (nsec) 00:08:31.692 00:08:31.692 real 0m1.277s 00:08:31.692 user 0m1.188s 00:08:31.692 sys 0m0.084s 00:08:31.692 10:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.692 10:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 ************************************ 00:08:31.692 END TEST thread_poller_perf 00:08:31.692 ************************************ 00:08:31.692 10:05:04 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:31.692 00:08:31.692 real 0m2.729s 00:08:31.692 user 0m2.441s 00:08:31.692 sys 0m0.297s 00:08:31.692 10:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.692 10:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 ************************************ 00:08:31.692 END TEST thread 00:08:31.692 ************************************ 00:08:31.692 10:05:04 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:31.692 10:05:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:31.692 10:05:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.692 10:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 ************************************ 00:08:31.692 START TEST accel 00:08:31.692 ************************************ 00:08:31.692 10:05:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:31.692 * Looking for test storage... 00:08:31.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:31.692 10:05:04 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:08:31.692 10:05:04 -- accel/accel.sh@74 -- # get_expected_opcs 00:08:31.692 10:05:04 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:31.692 10:05:04 -- accel/accel.sh@59 -- # spdk_tgt_pid=3282106 00:08:31.692 10:05:04 -- accel/accel.sh@60 -- # waitforlisten 3282106 00:08:31.692 10:05:04 -- common/autotest_common.sh@819 -- # '[' -z 3282106 ']' 00:08:31.692 10:05:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.692 10:05:04 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:31.692 10:05:04 -- accel/accel.sh@58 -- # build_accel_config 00:08:31.692 10:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.692 10:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
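The two poller_perf runs above differ only in the poller period: -l 1 registers 1000 pollers with a 1-microsecond period, while -l 0 uses period 0, which is why the second run completes far more iterations (3293000 vs 249000) at a much lower per-poll cost (668 vs 8899 cycles). Re-running either case directly from the workspace is essentially:

    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 1000 pollers, period 0, run for 1 second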
00:08:31.692 10:05:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.692 10:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.692 10:05:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.692 10:05:04 -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 10:05:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.692 10:05:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.692 10:05:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.692 10:05:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.692 10:05:04 -- accel/accel.sh@42 -- # jq -r . 00:08:31.692 [2024-04-17 10:05:04.903056] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:31.692 [2024-04-17 10:05:04.903125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282106 ] 00:08:31.692 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.692 [2024-04-17 10:05:04.982259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.952 [2024-04-17 10:05:05.070672] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.952 [2024-04-17 10:05:05.070822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.520 10:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.520 10:05:05 -- common/autotest_common.sh@852 -- # return 0 00:08:32.520 10:05:05 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:32.520 10:05:05 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:08:32.520 10:05:05 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:32.520 10:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.520 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:08:32.520 10:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.779 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.779 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.779 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.779 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 
10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # IFS== 00:08:32.780 10:05:05 -- accel/accel.sh@64 -- # read -r opc module 00:08:32.780 10:05:05 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:08:32.780 10:05:05 -- accel/accel.sh@67 -- # killprocess 3282106 00:08:32.780 10:05:05 -- common/autotest_common.sh@926 -- # '[' -z 3282106 ']' 00:08:32.780 10:05:05 -- common/autotest_common.sh@930 -- # kill -0 3282106 00:08:32.780 10:05:05 -- common/autotest_common.sh@931 -- # uname 00:08:32.780 10:05:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.780 10:05:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3282106 00:08:32.780 10:05:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.780 10:05:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.780 10:05:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3282106' 00:08:32.780 killing process with pid 3282106 00:08:32.780 10:05:05 -- common/autotest_common.sh@945 -- # kill 3282106 00:08:32.780 10:05:05 -- common/autotest_common.sh@950 -- # wait 3282106 00:08:33.039 10:05:06 -- accel/accel.sh@68 -- # trap - ERR 00:08:33.039 10:05:06 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:08:33.039 10:05:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:33.039 10:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.039 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.039 10:05:06 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:08:33.040 10:05:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:33.040 10:05:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.040 10:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.040 10:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.040 10:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.040 10:05:06 -- accel/accel.sh@42 -- # jq -r . 
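The long opcode for-loop traced above is the test populating its expected_opcs map: it asks the target which module handles each accel opcode and records "software" for every one. The equivalent one-liner against the default RPC socket, using the same jq filter that appears in the trace:

    scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'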
00:08:33.040 10:05:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.040 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.040 10:05:06 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:33.040 10:05:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:33.040 10:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.040 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.040 ************************************ 00:08:33.040 START TEST accel_missing_filename 00:08:33.040 ************************************ 00:08:33.040 10:05:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:08:33.040 10:05:06 -- common/autotest_common.sh@640 -- # local es=0 00:08:33.040 10:05:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:33.040 10:05:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:33.040 10:05:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:33.040 10:05:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:33.040 10:05:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:33.040 10:05:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:08:33.040 10:05:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:33.040 10:05:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.040 10:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.040 10:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.040 10:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.040 10:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.040 10:05:06 -- accel/accel.sh@42 -- # jq -r . 00:08:33.308 [2024-04-17 10:05:06.384280] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:33.308 [2024-04-17 10:05:06.384348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282431 ] 00:08:33.308 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.308 [2024-04-17 10:05:06.458796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.308 [2024-04-17 10:05:06.542250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.308 [2024-04-17 10:05:06.586381] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.580 [2024-04-17 10:05:06.648319] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:33.580 A filename is required. 
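The "A filename is required." failure above is the intended negative case: -w compress was given no -l input file, so accel_perf refuses to start. The compress_verify test that follows supplies one; stripped of the harness wrappers, its invocation is essentially (workspace-relative paths as used above):

    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y
    # still fails, because compress does not support the -y verify option (see the next test's output)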
00:08:33.580 10:05:06 -- common/autotest_common.sh@643 -- # es=234 00:08:33.580 10:05:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:33.580 10:05:06 -- common/autotest_common.sh@652 -- # es=106 00:08:33.580 10:05:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:33.580 10:05:06 -- common/autotest_common.sh@660 -- # es=1 00:08:33.580 10:05:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:33.580 00:08:33.580 real 0m0.394s 00:08:33.580 user 0m0.310s 00:08:33.580 sys 0m0.124s 00:08:33.580 10:05:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.580 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.580 ************************************ 00:08:33.580 END TEST accel_missing_filename 00:08:33.580 ************************************ 00:08:33.580 10:05:06 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:33.580 10:05:06 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:33.580 10:05:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.580 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:08:33.580 ************************************ 00:08:33.580 START TEST accel_compress_verify 00:08:33.580 ************************************ 00:08:33.580 10:05:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:33.580 10:05:06 -- common/autotest_common.sh@640 -- # local es=0 00:08:33.580 10:05:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:33.580 10:05:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:33.580 10:05:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:33.580 10:05:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:33.580 10:05:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:33.580 10:05:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:33.580 10:05:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:33.580 10:05:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.580 10:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.580 10:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.580 10:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.580 10:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.580 10:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.580 10:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.580 10:05:06 -- accel/accel.sh@42 -- # jq -r . 00:08:33.580 [2024-04-17 10:05:06.809315] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:33.580 [2024-04-17 10:05:06.809395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282458 ] 00:08:33.580 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.580 [2024-04-17 10:05:06.890738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.866 [2024-04-17 10:05:06.976363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.866 [2024-04-17 10:05:07.021205] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.866 [2024-04-17 10:05:07.084091] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:08:33.866 00:08:33.866 Compression does not support the verify option, aborting. 00:08:33.866 10:05:07 -- common/autotest_common.sh@643 -- # es=161 00:08:33.866 10:05:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:33.866 10:05:07 -- common/autotest_common.sh@652 -- # es=33 00:08:33.866 10:05:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:33.866 10:05:07 -- common/autotest_common.sh@660 -- # es=1 00:08:33.866 10:05:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:33.866 00:08:33.866 real 0m0.401s 00:08:33.866 user 0m0.314s 00:08:33.866 sys 0m0.129s 00:08:33.866 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.866 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.867 ************************************ 00:08:33.867 END TEST accel_compress_verify 00:08:33.867 ************************************ 00:08:34.138 10:05:07 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:34.139 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:34.139 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.139 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 ************************************ 00:08:34.139 START TEST accel_wrong_workload 00:08:34.139 ************************************ 00:08:34.139 10:05:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:08:34.139 10:05:07 -- common/autotest_common.sh@640 -- # local es=0 00:08:34.139 10:05:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:34.139 10:05:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:34.139 10:05:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.139 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.139 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.139 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.139 10:05:07 -- accel/accel.sh@42 -- # jq -r . 
00:08:34.139 Unsupported workload type: foobar 00:08:34.139 [2024-04-17 10:05:07.250912] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:34.139 accel_perf options: 00:08:34.139 [-h help message] 00:08:34.139 [-q queue depth per core] 00:08:34.139 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:34.139 [-T number of threads per core 00:08:34.139 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:34.139 [-t time in seconds] 00:08:34.139 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:34.139 [ dif_verify, , dif_generate, dif_generate_copy 00:08:34.139 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:34.139 [-l for compress/decompress workloads, name of uncompressed input file 00:08:34.139 [-S for crc32c workload, use this seed value (default 0) 00:08:34.139 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:34.139 [-f for fill workload, use this BYTE value (default 255) 00:08:34.139 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:34.139 [-y verify result if this switch is on] 00:08:34.139 [-a tasks to allocate per core (default: same value as -q)] 00:08:34.139 Can be used to spread operations across a wider range of memory. 00:08:34.139 10:05:07 -- common/autotest_common.sh@643 -- # es=1 00:08:34.139 10:05:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:34.139 10:05:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:34.139 10:05:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:34.139 00:08:34.139 real 0m0.035s 00:08:34.139 user 0m0.021s 00:08:34.139 sys 0m0.014s 00:08:34.139 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.139 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 ************************************ 00:08:34.139 END TEST accel_wrong_workload 00:08:34.139 ************************************ 00:08:34.139 Error: writing output failed: Broken pipe 00:08:34.139 10:05:07 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:34.139 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:08:34.139 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.139 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 ************************************ 00:08:34.139 START TEST accel_negative_buffers 00:08:34.139 ************************************ 00:08:34.139 10:05:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:34.139 10:05:07 -- common/autotest_common.sh@640 -- # local es=0 00:08:34.139 10:05:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:34.139 10:05:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:08:34.139 10:05:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:34.139 10:05:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.139 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.139 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.139 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.139 10:05:07 -- accel/accel.sh@42 -- # jq -r . 00:08:34.139 -x option must be non-negative. 00:08:34.139 [2024-04-17 10:05:07.317761] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:34.139 accel_perf options: 00:08:34.139 [-h help message] 00:08:34.139 [-q queue depth per core] 00:08:34.139 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:34.139 [-T number of threads per core 00:08:34.139 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:34.139 [-t time in seconds] 00:08:34.139 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:34.139 [ dif_verify, , dif_generate, dif_generate_copy 00:08:34.139 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:34.139 [-l for compress/decompress workloads, name of uncompressed input file 00:08:34.139 [-S for crc32c workload, use this seed value (default 0) 00:08:34.139 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:34.139 [-f for fill workload, use this BYTE value (default 255) 00:08:34.139 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:34.139 [-y verify result if this switch is on] 00:08:34.139 [-a tasks to allocate per core (default: same value as -q)] 00:08:34.139 Can be used to spread operations across a wider range of memory. 
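The option listing above is accel_perf's own help text, printed because -x -1 is rejected (-x, the number of xor source buffers, must be non-negative). The positive-path tests that follow use only a handful of these flags; the first crc32c case below, for example, reduces to:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # 1 second, crc32c workload, seed 32, verify results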
00:08:34.139 10:05:07 -- common/autotest_common.sh@643 -- # es=1 00:08:34.139 10:05:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:34.139 10:05:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:34.139 10:05:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:34.139 00:08:34.139 real 0m0.028s 00:08:34.139 user 0m0.012s 00:08:34.139 sys 0m0.016s 00:08:34.139 10:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.139 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 ************************************ 00:08:34.139 END TEST accel_negative_buffers 00:08:34.139 ************************************ 00:08:34.139 Error: writing output failed: Broken pipe 00:08:34.139 10:05:07 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:34.139 10:05:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:34.139 10:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.139 10:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:34.139 ************************************ 00:08:34.139 START TEST accel_crc32c 00:08:34.139 ************************************ 00:08:34.139 10:05:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:34.139 10:05:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.139 10:05:07 -- accel/accel.sh@17 -- # local accel_module 00:08:34.139 10:05:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:34.139 10:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.139 10:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.139 10:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.139 10:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.139 10:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.139 10:05:07 -- accel/accel.sh@42 -- # jq -r . 00:08:34.139 [2024-04-17 10:05:07.385901] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:34.139 [2024-04-17 10:05:07.385972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282526 ] 00:08:34.139 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.139 [2024-04-17 10:05:07.461431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.401 [2024-04-17 10:05:07.547563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.781 10:05:08 -- accel/accel.sh@18 -- # out=' 00:08:35.781 SPDK Configuration: 00:08:35.781 Core mask: 0x1 00:08:35.781 00:08:35.781 Accel Perf Configuration: 00:08:35.781 Workload Type: crc32c 00:08:35.781 CRC-32C seed: 32 00:08:35.781 Transfer size: 4096 bytes 00:08:35.781 Vector count 1 00:08:35.781 Module: software 00:08:35.781 Queue depth: 32 00:08:35.781 Allocate depth: 32 00:08:35.781 # threads/core: 1 00:08:35.781 Run time: 1 seconds 00:08:35.781 Verify: Yes 00:08:35.781 00:08:35.781 Running for 1 seconds... 
00:08:35.781 00:08:35.781 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:35.781 ------------------------------------------------------------------------------------ 00:08:35.781 0,0 353536/s 1381 MiB/s 0 0 00:08:35.781 ==================================================================================== 00:08:35.781 Total 353536/s 1381 MiB/s 0 0' 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:35.781 10:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:35.781 10:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:35.781 10:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:35.781 10:05:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.781 10:05:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.781 10:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:35.781 10:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:35.781 10:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:35.781 10:05:08 -- accel/accel.sh@42 -- # jq -r . 00:08:35.781 [2024-04-17 10:05:08.786496] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:35.781 [2024-04-17 10:05:08.786557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282791 ] 00:08:35.781 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.781 [2024-04-17 10:05:08.867548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.781 [2024-04-17 10:05:08.950921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.781 10:05:08 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:08 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:08 -- accel/accel.sh@21 -- # val=0x1 00:08:35.781 10:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:08 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:08 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:08 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=crc32c 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=32 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 
10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=software 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@23 -- # accel_module=software 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=32 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=32 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=1 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val=Yes 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:35.781 10:05:09 -- accel/accel.sh@21 -- # val= 00:08:35.781 10:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # IFS=: 00:08:35.781 10:05:09 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 
00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@21 -- # val= 00:08:37.162 10:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # IFS=: 00:08:37.162 10:05:10 -- accel/accel.sh@20 -- # read -r var val 00:08:37.162 10:05:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:37.162 10:05:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:37.162 10:05:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.162 00:08:37.162 real 0m2.813s 00:08:37.162 user 0m2.540s 00:08:37.162 sys 0m0.277s 00:08:37.162 10:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.162 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.162 ************************************ 00:08:37.162 END TEST accel_crc32c 00:08:37.162 ************************************ 00:08:37.162 10:05:10 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:37.162 10:05:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:37.162 10:05:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.162 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:08:37.162 ************************************ 00:08:37.162 START TEST accel_crc32c_C2 00:08:37.162 ************************************ 00:08:37.162 10:05:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:37.162 10:05:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.162 10:05:10 -- accel/accel.sh@17 -- # local accel_module 00:08:37.162 10:05:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:37.162 10:05:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:37.162 10:05:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.162 10:05:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.162 10:05:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.162 10:05:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.162 10:05:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.162 10:05:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.162 10:05:10 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.162 10:05:10 -- accel/accel.sh@42 -- # jq -r . 00:08:37.162 [2024-04-17 10:05:10.238608] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
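The 1381 MiB/s in the crc32c summary above is simply operations per second times the 4096-byte transfer size; a quick check with the logged figure:

    awk 'BEGIN { printf "%.0f MiB/s\n", 353536 * 4096 / 1048576 }'   # 1381 MiB/s, matching the report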
00:08:37.162 [2024-04-17 10:05:10.238696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283076 ] 00:08:37.162 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.162 [2024-04-17 10:05:10.312870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.162 [2024-04-17 10:05:10.395871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.543 10:05:11 -- accel/accel.sh@18 -- # out=' 00:08:38.543 SPDK Configuration: 00:08:38.543 Core mask: 0x1 00:08:38.543 00:08:38.543 Accel Perf Configuration: 00:08:38.543 Workload Type: crc32c 00:08:38.543 CRC-32C seed: 0 00:08:38.543 Transfer size: 4096 bytes 00:08:38.543 Vector count 2 00:08:38.543 Module: software 00:08:38.543 Queue depth: 32 00:08:38.543 Allocate depth: 32 00:08:38.543 # threads/core: 1 00:08:38.543 Run time: 1 seconds 00:08:38.543 Verify: Yes 00:08:38.543 00:08:38.543 Running for 1 seconds... 00:08:38.543 00:08:38.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:38.543 ------------------------------------------------------------------------------------ 00:08:38.543 0,0 279968/s 2187 MiB/s 0 0 00:08:38.543 ==================================================================================== 00:08:38.543 Total 279968/s 1093 MiB/s 0 0' 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:38.543 10:05:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:38.543 10:05:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:38.543 10:05:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:38.543 10:05:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.543 10:05:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.543 10:05:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:38.543 10:05:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:38.543 10:05:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:38.543 10:05:11 -- accel/accel.sh@42 -- # jq -r . 00:08:38.543 [2024-04-17 10:05:11.635265] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
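In the -C 2 run above the two bandwidth figures differ by exactly a factor of two: the per-core line (2187 MiB/s) is consistent with counting both 4096-byte buffers of each two-vector operation, while the Total line (1093 MiB/s) is consistent with counting one. This is an inference from the logged numbers only, not from accel_perf's source:

    awk 'BEGIN { printf "%.0f vs %.0f MiB/s\n", 279968*2*4096/1048576, 279968*4096/1048576 }'   # prints 2187 vs 1094; the log truncates to 1093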
00:08:38.543 [2024-04-17 10:05:11.635326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283342 ] 00:08:38.543 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.543 [2024-04-17 10:05:11.715965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.543 [2024-04-17 10:05:11.800367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val=0x1 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val=crc32c 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val=0 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.543 10:05:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:38.543 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.543 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val=software 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@23 -- # accel_module=software 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val=32 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val=32 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- 
accel/accel.sh@21 -- # val=1 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val=Yes 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:38.544 10:05:11 -- accel/accel.sh@21 -- # val= 00:08:38.544 10:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # IFS=: 00:08:38.544 10:05:11 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@21 -- # val= 00:08:39.926 10:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # IFS=: 00:08:39.926 10:05:13 -- accel/accel.sh@20 -- # read -r var val 00:08:39.926 10:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:39.926 10:05:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:39.926 10:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.926 00:08:39.926 real 0m2.808s 00:08:39.926 user 0m2.544s 00:08:39.926 sys 0m0.269s 00:08:39.926 10:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.926 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:08:39.926 ************************************ 00:08:39.926 END TEST accel_crc32c_C2 00:08:39.926 ************************************ 00:08:39.926 10:05:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:39.926 10:05:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:39.926 10:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.926 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:08:39.926 ************************************ 00:08:39.926 START TEST accel_copy 
00:08:39.926 ************************************ 00:08:39.926 10:05:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:08:39.926 10:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:39.926 10:05:13 -- accel/accel.sh@17 -- # local accel_module 00:08:39.926 10:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:08:39.926 10:05:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:39.926 10:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.926 10:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.926 10:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.926 10:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.926 10:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.926 10:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.926 10:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.926 10:05:13 -- accel/accel.sh@42 -- # jq -r . 00:08:39.926 [2024-04-17 10:05:13.083722] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:39.926 [2024-04-17 10:05:13.083783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283630 ] 00:08:39.926 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.926 [2024-04-17 10:05:13.164659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.926 [2024-04-17 10:05:13.248876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.304 10:05:14 -- accel/accel.sh@18 -- # out=' 00:08:41.304 SPDK Configuration: 00:08:41.304 Core mask: 0x1 00:08:41.304 00:08:41.304 Accel Perf Configuration: 00:08:41.304 Workload Type: copy 00:08:41.304 Transfer size: 4096 bytes 00:08:41.304 Vector count 1 00:08:41.305 Module: software 00:08:41.305 Queue depth: 32 00:08:41.305 Allocate depth: 32 00:08:41.305 # threads/core: 1 00:08:41.305 Run time: 1 seconds 00:08:41.305 Verify: Yes 00:08:41.305 00:08:41.305 Running for 1 seconds... 00:08:41.305 00:08:41.305 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.305 ------------------------------------------------------------------------------------ 00:08:41.305 0,0 260288/s 1016 MiB/s 0 0 00:08:41.305 ==================================================================================== 00:08:41.305 Total 260288/s 1016 MiB/s 0 0' 00:08:41.305 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.305 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.305 10:05:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:41.305 10:05:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:41.305 10:05:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.305 10:05:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.305 10:05:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.305 10:05:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.305 10:05:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.305 10:05:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.305 10:05:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.305 10:05:14 -- accel/accel.sh@42 -- # jq -r . 00:08:41.305 [2024-04-17 10:05:14.486879] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:41.305 [2024-04-17 10:05:14.486946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283896 ] 00:08:41.305 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.305 [2024-04-17 10:05:14.565769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.564 [2024-04-17 10:05:14.649421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=0x1 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=copy 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@24 -- # accel_opc=copy 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=software 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=32 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=32 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=1 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val=Yes 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:41.564 10:05:14 -- accel/accel.sh@21 -- # val= 00:08:41.564 10:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # IFS=: 00:08:41.564 10:05:14 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@21 -- # val= 00:08:42.944 10:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # IFS=: 00:08:42.944 10:05:15 -- accel/accel.sh@20 -- # read -r var val 00:08:42.944 10:05:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:42.944 10:05:15 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:08:42.944 10:05:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.944 00:08:42.944 real 0m2.810s 00:08:42.944 user 0m2.544s 00:08:42.944 sys 0m0.271s 00:08:42.944 10:05:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.944 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:42.944 ************************************ 00:08:42.944 END TEST accel_copy 00:08:42.944 ************************************ 00:08:42.944 10:05:15 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:42.944 10:05:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:42.944 10:05:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.944 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:42.944 ************************************ 00:08:42.944 START TEST accel_fill 00:08:42.944 ************************************ 00:08:42.944 10:05:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:42.944 10:05:15 -- accel/accel.sh@16 -- # local accel_opc 
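Each trace line above carries the bash xtrace prefix (timestamp, script@line, then the command), and each workload follows the same two-step shape: accel_perf is launched once to print the "SPDK Configuration" report, then the same command is launched again and its "Key: value" output appears to be read back field by field (IFS=: read -r var val, then the case "$var" dispatch), which is what produces the long runs of val= lines and ultimately sets accel_module / accel_opc for the checks at accel.sh@28. A minimal sketch of that parsing pattern, not the verbatim accel.sh source:

  while IFS=: read -r var val; do
      val=${val# }                          # drop the space after the colon
      case "$var" in
          *Module*)          accel_module=$val ;;   # e.g. "software"
          *'Workload Type'*) accel_opc=$val ;;      # e.g. "copy"
      esac
  done < <(./build/examples/accel_perf -t 1 -w copy -y)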
00:08:42.944 10:05:15 -- accel/accel.sh@17 -- # local accel_module 00:08:42.944 10:05:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:42.944 10:05:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:42.944 10:05:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.944 10:05:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:42.944 10:05:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.944 10:05:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.944 10:05:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:42.944 10:05:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:42.944 10:05:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:42.944 10:05:15 -- accel/accel.sh@42 -- # jq -r . 00:08:42.944 [2024-04-17 10:05:15.932323] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:42.944 [2024-04-17 10:05:15.932381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284175 ] 00:08:42.944 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.944 [2024-04-17 10:05:16.013132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.944 [2024-04-17 10:05:16.097192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.323 10:05:17 -- accel/accel.sh@18 -- # out=' 00:08:44.323 SPDK Configuration: 00:08:44.323 Core mask: 0x1 00:08:44.323 00:08:44.323 Accel Perf Configuration: 00:08:44.323 Workload Type: fill 00:08:44.323 Fill pattern: 0x80 00:08:44.323 Transfer size: 4096 bytes 00:08:44.323 Vector count 1 00:08:44.323 Module: software 00:08:44.323 Queue depth: 64 00:08:44.323 Allocate depth: 64 00:08:44.323 # threads/core: 1 00:08:44.323 Run time: 1 seconds 00:08:44.323 Verify: Yes 00:08:44.323 00:08:44.323 Running for 1 seconds... 00:08:44.323 00:08:44.323 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:44.323 ------------------------------------------------------------------------------------ 00:08:44.323 0,0 412544/s 1611 MiB/s 0 0 00:08:44.323 ==================================================================================== 00:08:44.323 Total 412544/s 1611 MiB/s 0 0' 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:44.323 10:05:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:44.323 10:05:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.323 10:05:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.323 10:05:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.323 10:05:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.323 10:05:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.323 10:05:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.323 10:05:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.323 10:05:17 -- accel/accel.sh@42 -- # jq -r . 00:08:44.323 [2024-04-17 10:05:17.336451] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
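For the fill case the extra accel_perf flags map directly onto the configuration block printed above; the mapping below is read off this log, with -f apparently taking a decimal byte value since 128 shows up as 0x80:

  -w fill    ->  Workload Type: fill
  -f 128     ->  Fill pattern: 0x80   (128 decimal)
  -q 64      ->  Queue depth: 64
  -a 64      ->  Allocate depth: 64
  -t 1 -y    ->  Run time: 1 seconds, Verify: Yes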
00:08:44.323 [2024-04-17 10:05:17.336523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284450 ] 00:08:44.323 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.323 [2024-04-17 10:05:17.415613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.323 [2024-04-17 10:05:17.499576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=0x1 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=fill 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@24 -- # accel_opc=fill 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=0x80 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=software 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=64 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val=64 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- 
accel/accel.sh@21 -- # val=1 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.323 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.323 10:05:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:44.323 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.324 10:05:17 -- accel/accel.sh@21 -- # val=Yes 00:08:44.324 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.324 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.324 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:44.324 10:05:17 -- accel/accel.sh@21 -- # val= 00:08:44.324 10:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # IFS=: 00:08:44.324 10:05:17 -- accel/accel.sh@20 -- # read -r var val 00:08:45.707 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@21 -- # val= 00:08:45.708 10:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:45.708 10:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:45.708 10:05:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:45.708 10:05:18 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:08:45.708 10:05:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.708 00:08:45.708 real 0m2.812s 00:08:45.708 user 0m2.559s 00:08:45.708 sys 0m0.260s 00:08:45.708 10:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.708 10:05:18 -- common/autotest_common.sh@10 -- # set +x 00:08:45.708 ************************************ 00:08:45.708 END TEST accel_fill 00:08:45.708 ************************************ 00:08:45.708 10:05:18 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:45.708 10:05:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:45.708 10:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.708 10:05:18 -- common/autotest_common.sh@10 -- # set +x 00:08:45.708 ************************************ 00:08:45.708 START TEST 
accel_copy_crc32c 00:08:45.708 ************************************ 00:08:45.708 10:05:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:08:45.708 10:05:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:45.708 10:05:18 -- accel/accel.sh@17 -- # local accel_module 00:08:45.708 10:05:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:45.708 10:05:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:45.708 10:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:45.708 10:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:45.708 10:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.708 10:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.708 10:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:45.708 10:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:45.708 10:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:45.708 10:05:18 -- accel/accel.sh@42 -- # jq -r . 00:08:45.708 [2024-04-17 10:05:18.781507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:45.708 [2024-04-17 10:05:18.781571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284729 ] 00:08:45.708 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.708 [2024-04-17 10:05:18.862581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.708 [2024-04-17 10:05:18.948743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.086 10:05:20 -- accel/accel.sh@18 -- # out=' 00:08:47.086 SPDK Configuration: 00:08:47.086 Core mask: 0x1 00:08:47.086 00:08:47.086 Accel Perf Configuration: 00:08:47.086 Workload Type: copy_crc32c 00:08:47.086 CRC-32C seed: 0 00:08:47.086 Vector size: 4096 bytes 00:08:47.086 Transfer size: 4096 bytes 00:08:47.086 Vector count 1 00:08:47.086 Module: software 00:08:47.086 Queue depth: 32 00:08:47.086 Allocate depth: 32 00:08:47.086 # threads/core: 1 00:08:47.086 Run time: 1 seconds 00:08:47.086 Verify: Yes 00:08:47.086 00:08:47.086 Running for 1 seconds... 00:08:47.086 00:08:47.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:47.086 ------------------------------------------------------------------------------------ 00:08:47.086 0,0 201888/s 788 MiB/s 0 0 00:08:47.086 ==================================================================================== 00:08:47.086 Total 201888/s 788 MiB/s 0 0' 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:47.086 10:05:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:47.086 10:05:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:47.086 10:05:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:47.086 10:05:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.086 10:05:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.086 10:05:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:47.086 10:05:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:47.086 10:05:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:47.086 10:05:20 -- accel/accel.sh@42 -- # jq -r . 
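The -c /dev/fd/62 argument on every invocation appears to carry the JSON accel configuration assembled by build_accel_config; in these runs it stays empty (accel_json_cfg=() and each guard above evaluates false), so the generic software module services every opcode. A rough standalone equivalent, assuming a built SPDK tree and that omitting the config falls back to the same software path:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y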
00:08:47.086 [2024-04-17 10:05:20.190150] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:47.086 [2024-04-17 10:05:20.190209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285001 ] 00:08:47.086 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.086 [2024-04-17 10:05:20.271006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.086 [2024-04-17 10:05:20.354723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val=0x1 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val=0 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.086 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.086 10:05:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:47.086 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val=software 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val=32 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 
00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val=32 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val=1 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val=Yes 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:47.087 10:05:20 -- accel/accel.sh@21 -- # val= 00:08:47.087 10:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # IFS=: 00:08:47.087 10:05:20 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@21 -- # val= 00:08:48.465 10:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:48.465 10:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:48.465 10:05:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:48.465 10:05:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:48.465 10:05:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.465 00:08:48.465 real 0m2.816s 00:08:48.465 user 0m2.566s 00:08:48.465 sys 0m0.257s 00:08:48.465 10:05:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.465 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 ************************************ 00:08:48.465 END TEST accel_copy_crc32c 00:08:48.465 ************************************ 00:08:48.465 
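The three [[ ... ]] checks just before each END TEST banner are what these cases actually assert; in the trace the variables are already expanded, but written out with the names assigned at accel.sh@23/@24 they amount to roughly:

  [[ -n "$accel_module" ]]            # an engine was reported (software here)
  [[ -n "$accel_opc" ]]               # the opcode was parsed back (copy_crc32c here)
  [[ "$accel_module" == software ]]   # and it is the expected software module

The next case repeats copy_crc32c with -C 2, i.e. two 4096-byte source vectors per 8192-byte transfer; in its results table the per-core row computes bandwidth against the full 8 KiB transfer while the Total row appears to use the 4 KiB vector size, which is why the two rows differ by exactly a factor of two at the same 144960 transfers/s.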
10:05:21 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:48.465 10:05:21 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:48.465 10:05:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.465 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:08:48.465 ************************************ 00:08:48.465 START TEST accel_copy_crc32c_C2 00:08:48.465 ************************************ 00:08:48.465 10:05:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:48.465 10:05:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:48.465 10:05:21 -- accel/accel.sh@17 -- # local accel_module 00:08:48.465 10:05:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:48.465 10:05:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:48.465 10:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.465 10:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.465 10:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.465 10:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.465 10:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.465 10:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:48.465 10:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.465 10:05:21 -- accel/accel.sh@42 -- # jq -r . 00:08:48.465 [2024-04-17 10:05:21.637953] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:48.465 [2024-04-17 10:05:21.638029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285282 ] 00:08:48.465 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.466 [2024-04-17 10:05:21.720096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.725 [2024-04-17 10:05:21.805160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.104 10:05:23 -- accel/accel.sh@18 -- # out=' 00:08:50.104 SPDK Configuration: 00:08:50.104 Core mask: 0x1 00:08:50.104 00:08:50.104 Accel Perf Configuration: 00:08:50.104 Workload Type: copy_crc32c 00:08:50.104 CRC-32C seed: 0 00:08:50.104 Vector size: 4096 bytes 00:08:50.104 Transfer size: 8192 bytes 00:08:50.104 Vector count 2 00:08:50.104 Module: software 00:08:50.104 Queue depth: 32 00:08:50.104 Allocate depth: 32 00:08:50.104 # threads/core: 1 00:08:50.104 Run time: 1 seconds 00:08:50.104 Verify: Yes 00:08:50.104 00:08:50.104 Running for 1 seconds... 
00:08:50.104 00:08:50.104 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:50.104 ------------------------------------------------------------------------------------ 00:08:50.104 0,0 144960/s 1132 MiB/s 0 0 00:08:50.104 ==================================================================================== 00:08:50.104 Total 144960/s 566 MiB/s 0 0' 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:50.104 10:05:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:50.104 10:05:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.104 10:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.104 10:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.104 10:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.104 10:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.104 10:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.104 10:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.104 10:05:23 -- accel/accel.sh@42 -- # jq -r . 00:08:50.104 [2024-04-17 10:05:23.044455] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:50.104 [2024-04-17 10:05:23.044529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285549 ] 00:08:50.104 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.104 [2024-04-17 10:05:23.126169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.104 [2024-04-17 10:05:23.209586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=0x1 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=0 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 
00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=software 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@23 -- # accel_module=software 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=32 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=32 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=1 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val=Yes 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:50.104 10:05:23 -- accel/accel.sh@21 -- # val= 00:08:50.104 10:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # IFS=: 00:08:50.104 10:05:23 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@21 -- # val= 00:08:51.483 10:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:51.483 10:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:51.483 10:05:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:51.483 10:05:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:51.483 10:05:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.483 00:08:51.483 real 0m2.817s 00:08:51.483 user 0m2.549s 00:08:51.483 sys 0m0.274s 00:08:51.483 10:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.483 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:08:51.483 ************************************ 00:08:51.483 END TEST accel_copy_crc32c_C2 00:08:51.483 ************************************ 00:08:51.483 10:05:24 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:51.483 10:05:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:51.483 10:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.483 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:08:51.483 ************************************ 00:08:51.483 START TEST accel_dualcast 00:08:51.483 ************************************ 00:08:51.483 10:05:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:51.483 10:05:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:51.483 10:05:24 -- accel/accel.sh@17 -- # local accel_module 00:08:51.483 10:05:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:51.483 10:05:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:51.483 10:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:51.483 10:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:51.483 10:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.483 10:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.483 10:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:51.483 10:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:51.483 10:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:51.483 10:05:24 -- accel/accel.sh@42 -- # jq -r . 00:08:51.483 [2024-04-17 10:05:24.493848] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:51.483 [2024-04-17 10:05:24.493923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285838 ] 00:08:51.483 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.483 [2024-04-17 10:05:24.575872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.483 [2024-04-17 10:05:24.659373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.860 10:05:25 -- accel/accel.sh@18 -- # out=' 00:08:52.860 SPDK Configuration: 00:08:52.860 Core mask: 0x1 00:08:52.860 00:08:52.860 Accel Perf Configuration: 00:08:52.860 Workload Type: dualcast 00:08:52.860 Transfer size: 4096 bytes 00:08:52.861 Vector count 1 00:08:52.861 Module: software 00:08:52.861 Queue depth: 32 00:08:52.861 Allocate depth: 32 00:08:52.861 # threads/core: 1 00:08:52.861 Run time: 1 seconds 00:08:52.861 Verify: Yes 00:08:52.861 00:08:52.861 Running for 1 seconds... 00:08:52.861 00:08:52.861 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:52.861 ------------------------------------------------------------------------------------ 00:08:52.861 0,0 312288/s 1219 MiB/s 0 0 00:08:52.861 ==================================================================================== 00:08:52.861 Total 312288/s 1219 MiB/s 0 0' 00:08:52.861 10:05:25 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:25 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:52.861 10:05:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:52.861 10:05:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.861 10:05:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.861 10:05:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.861 10:05:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.861 10:05:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.861 10:05:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.861 10:05:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.861 10:05:25 -- accel/accel.sh@42 -- # jq -r . 00:08:52.861 [2024-04-17 10:05:25.898815] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
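dualcast differs from plain copy in that each operation copies one source buffer into two destination buffers. The Bandwidth column above appears to count the 4 KiB source payload once per operation, so the bytes actually written are about double the reported figure. A sketch of the arithmetic, not harness output:

  echo $(( 312288 * 4096 / 1024 / 1024 ))       # 1219 MiB/s, as reported
  echo $(( 2 * 312288 * 4096 / 1024 / 1024 ))   # ~2439 MiB/s landing across the two destinations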
00:08:52.861 [2024-04-17 10:05:25.898887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286104 ] 00:08:52.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.861 [2024-04-17 10:05:25.978738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.861 [2024-04-17 10:05:26.061321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=0x1 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=dualcast 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=software 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@23 -- # accel_module=software 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=32 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=32 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=1 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 
-- accel/accel.sh@21 -- # val='1 seconds' 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val=Yes 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:52.861 10:05:26 -- accel/accel.sh@21 -- # val= 00:08:52.861 10:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # IFS=: 00:08:52.861 10:05:26 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@21 -- # val= 00:08:54.238 10:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:54.238 10:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:54.238 10:05:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:54.238 10:05:27 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:54.238 10:05:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.238 00:08:54.238 real 0m2.812s 00:08:54.238 user 0m2.560s 00:08:54.238 sys 0m0.257s 00:08:54.238 10:05:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.238 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.238 ************************************ 00:08:54.238 END TEST accel_dualcast 00:08:54.238 ************************************ 00:08:54.238 10:05:27 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:54.238 10:05:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:54.238 10:05:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.238 10:05:27 -- common/autotest_common.sh@10 -- # set +x 00:08:54.238 ************************************ 00:08:54.238 START TEST accel_compare 00:08:54.238 ************************************ 00:08:54.238 10:05:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:54.238 10:05:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:54.238 10:05:27 
-- accel/accel.sh@17 -- # local accel_module 00:08:54.238 10:05:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:54.238 10:05:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:54.238 10:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.238 10:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:54.238 10:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.238 10:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.238 10:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:54.238 10:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:54.238 10:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:54.238 10:05:27 -- accel/accel.sh@42 -- # jq -r . 00:08:54.238 [2024-04-17 10:05:27.342679] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:54.238 [2024-04-17 10:05:27.342756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286392 ] 00:08:54.238 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.238 [2024-04-17 10:05:27.423717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.238 [2024-04-17 10:05:27.508412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.615 10:05:28 -- accel/accel.sh@18 -- # out=' 00:08:55.615 SPDK Configuration: 00:08:55.615 Core mask: 0x1 00:08:55.615 00:08:55.615 Accel Perf Configuration: 00:08:55.615 Workload Type: compare 00:08:55.615 Transfer size: 4096 bytes 00:08:55.615 Vector count 1 00:08:55.615 Module: software 00:08:55.615 Queue depth: 32 00:08:55.615 Allocate depth: 32 00:08:55.615 # threads/core: 1 00:08:55.615 Run time: 1 seconds 00:08:55.615 Verify: Yes 00:08:55.615 00:08:55.615 Running for 1 seconds... 00:08:55.615 00:08:55.615 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:55.615 ------------------------------------------------------------------------------------ 00:08:55.615 0,0 381344/s 1489 MiB/s 0 0 00:08:55.615 ==================================================================================== 00:08:55.615 Total 381344/s 1489 MiB/s 0 0' 00:08:55.615 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.615 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.615 10:05:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:55.615 10:05:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:55.615 10:05:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:55.615 10:05:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:55.615 10:05:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.615 10:05:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.615 10:05:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:55.615 10:05:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:55.615 10:05:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:55.615 10:05:28 -- accel/accel.sh@42 -- # jq -r . 00:08:55.615 [2024-04-17 10:05:28.745180] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
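Every configuration block so far reports Module: software, consistent with the empty accel JSON configuration noted earlier, so these throughput numbers are CPU-only results rather than hardware-offload results; the zeroes in the Failed and Miscompares columns mean the -y verification pass found no mismatches. One way to confirm the module across a saved copy of this console output (build.log is a stand-in file name):

  grep -c 'Module: software' build.log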
00:08:55.615 [2024-04-17 10:05:28.745243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286658 ] 00:08:55.615 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.615 [2024-04-17 10:05:28.824578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.615 [2024-04-17 10:05:28.907076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=0x1 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=compare 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=software 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=32 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=32 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.875 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.875 10:05:28 -- accel/accel.sh@21 -- # val=1 00:08:55.875 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.876 10:05:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:08:55.876 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.876 10:05:28 -- accel/accel.sh@21 -- # val=Yes 00:08:55.876 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.876 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.876 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:55.876 10:05:28 -- accel/accel.sh@21 -- # val= 00:08:55.876 10:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:55.876 10:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@21 -- # val= 00:08:56.812 10:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:56.812 10:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:56.812 10:05:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:56.812 10:05:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:56.812 10:05:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:56.812 00:08:56.812 real 0m2.811s 00:08:56.812 user 0m2.570s 00:08:56.812 sys 0m0.245s 00:08:56.812 10:05:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.812 10:05:30 -- common/autotest_common.sh@10 -- # set +x 00:08:56.812 ************************************ 00:08:56.812 END TEST accel_compare 00:08:56.812 ************************************ 00:08:57.071 10:05:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:57.071 10:05:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:57.071 10:05:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.071 10:05:30 -- common/autotest_common.sh@10 -- # set +x 00:08:57.071 ************************************ 00:08:57.071 START TEST accel_xor 00:08:57.071 ************************************ 00:08:57.071 10:05:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:57.071 10:05:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:57.072 10:05:30 -- accel/accel.sh@17 
-- # local accel_module 00:08:57.072 10:05:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:57.072 10:05:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:57.072 10:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.072 10:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:57.072 10:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.072 10:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.072 10:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:57.072 10:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:57.072 10:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:57.072 10:05:30 -- accel/accel.sh@42 -- # jq -r . 00:08:57.072 [2024-04-17 10:05:30.191015] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:57.072 [2024-04-17 10:05:30.191072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286937 ] 00:08:57.072 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.072 [2024-04-17 10:05:30.271674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.072 [2024-04-17 10:05:30.355648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.449 10:05:31 -- accel/accel.sh@18 -- # out=' 00:08:58.449 SPDK Configuration: 00:08:58.449 Core mask: 0x1 00:08:58.449 00:08:58.449 Accel Perf Configuration: 00:08:58.449 Workload Type: xor 00:08:58.449 Source buffers: 2 00:08:58.449 Transfer size: 4096 bytes 00:08:58.449 Vector count 1 00:08:58.449 Module: software 00:08:58.449 Queue depth: 32 00:08:58.449 Allocate depth: 32 00:08:58.449 # threads/core: 1 00:08:58.449 Run time: 1 seconds 00:08:58.449 Verify: Yes 00:08:58.449 00:08:58.449 Running for 1 seconds... 00:08:58.449 00:08:58.449 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:58.449 ------------------------------------------------------------------------------------ 00:08:58.449 0,0 310688/s 1213 MiB/s 0 0 00:08:58.449 ==================================================================================== 00:08:58.449 Total 310688/s 1213 MiB/s 0 0' 00:08:58.449 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.449 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.449 10:05:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:58.449 10:05:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:58.449 10:05:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:58.449 10:05:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:58.449 10:05:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.449 10:05:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.449 10:05:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:58.449 10:05:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:58.449 10:05:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:58.449 10:05:31 -- accel/accel.sh@42 -- # jq -r . 00:08:58.449 [2024-04-17 10:05:31.592422] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
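For reference, each of these cases boils down to a single accel_perf invocation that accel.sh assembles; the binary path and the -t/-w/-y/-x options are exactly what the trace shows. A minimal sketch of repeating the xor run by hand against an already-built tree follows — the environment variable and the decision to skip the JSON config that the harness feeds over /dev/fd/62 are assumptions, not part of the captured run (the compare case just before this and every other case in this stretch follow the same shape):

  # assumption: SPDK is built under this path (taken from the trace) and hugepages are
  # already reserved, e.g. via the tree's scripts/setup.sh
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # software xor over two source buffers for 1 second, verifying the result (-y), as in the run above
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y
  # the follow-up case in the log only adds a third source buffer
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3

Run standalone like this, accel_perf prints the same "SPDK Configuration" block and Core,Thread/Total table that the harness captures above.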
00:08:58.449 [2024-04-17 10:05:31.592487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287210 ] 00:08:58.449 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.450 [2024-04-17 10:05:31.671377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.450 [2024-04-17 10:05:31.753957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=0x1 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=xor 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=2 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=software 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@23 -- # accel_module=software 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=32 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=32 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- 
accel/accel.sh@21 -- # val=1 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val=Yes 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:58.709 10:05:31 -- accel/accel.sh@21 -- # val= 00:08:58.709 10:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # IFS=: 00:08:58.709 10:05:31 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@21 -- # val= 00:08:59.646 10:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:59.646 10:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:59.646 10:05:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:59.646 10:05:32 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:59.646 10:05:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.646 00:08:59.646 real 0m2.805s 00:08:59.646 user 0m2.552s 00:08:59.646 sys 0m0.258s 00:08:59.646 10:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.646 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:08:59.646 ************************************ 00:08:59.646 END TEST accel_xor 00:08:59.646 ************************************ 00:08:59.905 10:05:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:59.905 10:05:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:59.905 10:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.905 10:05:33 -- common/autotest_common.sh@10 -- # set +x 00:08:59.905 ************************************ 00:08:59.905 START TEST accel_xor 
00:08:59.905 ************************************ 00:08:59.905 10:05:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:59.905 10:05:33 -- accel/accel.sh@16 -- # local accel_opc 00:08:59.905 10:05:33 -- accel/accel.sh@17 -- # local accel_module 00:08:59.905 10:05:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:59.905 10:05:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:59.905 10:05:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:59.905 10:05:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:59.905 10:05:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.905 10:05:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.905 10:05:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:59.905 10:05:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:59.905 10:05:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:59.905 10:05:33 -- accel/accel.sh@42 -- # jq -r . 00:08:59.905 [2024-04-17 10:05:33.035597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:59.905 [2024-04-17 10:05:33.035678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287491 ] 00:08:59.905 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.905 [2024-04-17 10:05:33.116078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.905 [2024-04-17 10:05:33.199345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.282 10:05:34 -- accel/accel.sh@18 -- # out=' 00:09:01.282 SPDK Configuration: 00:09:01.282 Core mask: 0x1 00:09:01.282 00:09:01.282 Accel Perf Configuration: 00:09:01.283 Workload Type: xor 00:09:01.283 Source buffers: 3 00:09:01.283 Transfer size: 4096 bytes 00:09:01.283 Vector count 1 00:09:01.283 Module: software 00:09:01.283 Queue depth: 32 00:09:01.283 Allocate depth: 32 00:09:01.283 # threads/core: 1 00:09:01.283 Run time: 1 seconds 00:09:01.283 Verify: Yes 00:09:01.283 00:09:01.283 Running for 1 seconds... 00:09:01.283 00:09:01.283 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:01.283 ------------------------------------------------------------------------------------ 00:09:01.283 0,0 295552/s 1154 MiB/s 0 0 00:09:01.283 ==================================================================================== 00:09:01.283 Total 295552/s 1154 MiB/s 0 0' 00:09:01.283 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.283 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.283 10:05:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:01.283 10:05:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:01.283 10:05:34 -- accel/accel.sh@12 -- # build_accel_config 00:09:01.283 10:05:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:01.283 10:05:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.283 10:05:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.283 10:05:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:01.283 10:05:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:01.283 10:05:34 -- accel/accel.sh@41 -- # local IFS=, 00:09:01.283 10:05:34 -- accel/accel.sh@42 -- # jq -r . 00:09:01.283 [2024-04-17 10:05:34.435072] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:01.283 [2024-04-17 10:05:34.435131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287761 ] 00:09:01.283 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.283 [2024-04-17 10:05:34.515407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.283 [2024-04-17 10:05:34.598001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=0x1 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=xor 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@24 -- # accel_opc=xor 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=3 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=software 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@23 -- # accel_module=software 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=32 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=32 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- 
accel/accel.sh@21 -- # val=1 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val=Yes 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:01.542 10:05:34 -- accel/accel.sh@21 -- # val= 00:09:01.542 10:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # IFS=: 00:09:01.542 10:05:34 -- accel/accel.sh@20 -- # read -r var val 00:09:02.479 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.479 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.479 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.479 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.479 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.479 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.479 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.479 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.479 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.738 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.738 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.738 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.738 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.738 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.738 10:05:35 -- accel/accel.sh@21 -- # val= 00:09:02.738 10:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.738 10:05:35 -- accel/accel.sh@20 -- # IFS=: 00:09:02.738 10:05:35 -- accel/accel.sh@20 -- # read -r var val 00:09:02.738 10:05:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:02.738 10:05:35 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:09:02.738 10:05:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.738 00:09:02.738 real 0m2.805s 00:09:02.738 user 0m2.545s 00:09:02.738 sys 0m0.265s 00:09:02.739 10:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.739 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:02.739 ************************************ 00:09:02.739 END TEST accel_xor 00:09:02.739 ************************************ 00:09:02.739 10:05:35 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:02.739 10:05:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:02.739 10:05:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.739 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:09:02.739 ************************************ 00:09:02.739 START TEST 
accel_dif_verify 00:09:02.739 ************************************ 00:09:02.739 10:05:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:09:02.739 10:05:35 -- accel/accel.sh@16 -- # local accel_opc 00:09:02.739 10:05:35 -- accel/accel.sh@17 -- # local accel_module 00:09:02.739 10:05:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:09:02.739 10:05:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:02.739 10:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.739 10:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.739 10:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.739 10:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.739 10:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.739 10:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.739 10:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.739 10:05:35 -- accel/accel.sh@42 -- # jq -r . 00:09:02.739 [2024-04-17 10:05:35.880268] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:02.739 [2024-04-17 10:05:35.880340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288045 ] 00:09:02.739 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.739 [2024-04-17 10:05:35.961355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.739 [2024-04-17 10:05:36.045043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.117 10:05:37 -- accel/accel.sh@18 -- # out=' 00:09:04.117 SPDK Configuration: 00:09:04.117 Core mask: 0x1 00:09:04.117 00:09:04.117 Accel Perf Configuration: 00:09:04.117 Workload Type: dif_verify 00:09:04.117 Vector size: 4096 bytes 00:09:04.117 Transfer size: 4096 bytes 00:09:04.117 Block size: 512 bytes 00:09:04.117 Metadata size: 8 bytes 00:09:04.117 Vector count 1 00:09:04.117 Module: software 00:09:04.117 Queue depth: 32 00:09:04.117 Allocate depth: 32 00:09:04.117 # threads/core: 1 00:09:04.117 Run time: 1 seconds 00:09:04.117 Verify: No 00:09:04.117 00:09:04.117 Running for 1 seconds... 00:09:04.117 00:09:04.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:04.117 ------------------------------------------------------------------------------------ 00:09:04.117 0,0 81312/s 322 MiB/s 0 0 00:09:04.117 ==================================================================================== 00:09:04.117 Total 81312/s 317 MiB/s 0 0' 00:09:04.117 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.117 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.117 10:05:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:04.117 10:05:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:04.117 10:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:09:04.117 10:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:04.117 10:05:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.117 10:05:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.117 10:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:04.117 10:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:04.117 10:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:09:04.117 10:05:37 -- accel/accel.sh@42 -- # jq -r . 
00:09:04.117 [2024-04-17 10:05:37.281456] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:04.117 [2024-04-17 10:05:37.281514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288310 ] 00:09:04.117 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.117 [2024-04-17 10:05:37.361785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.117 [2024-04-17 10:05:37.444879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=0x1 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=dif_verify 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=software 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@23 -- # 
accel_module=software 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=32 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=32 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=1 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val=No 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:04.376 10:05:37 -- accel/accel.sh@21 -- # val= 00:09:04.376 10:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # IFS=: 00:09:04.376 10:05:37 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@21 -- # val= 00:09:05.783 10:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # IFS=: 00:09:05.783 10:05:38 -- accel/accel.sh@20 -- # read -r var val 00:09:05.783 10:05:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:05.783 10:05:38 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:09:05.783 10:05:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:05.783 00:09:05.783 real 0m2.809s 00:09:05.783 user 0m2.551s 00:09:05.783 sys 0m0.265s 00:09:05.783 10:05:38 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.783 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:09:05.783 ************************************ 00:09:05.783 END TEST accel_dif_verify 00:09:05.783 ************************************ 00:09:05.783 10:05:38 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:05.783 10:05:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:05.783 10:05:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.783 10:05:38 -- common/autotest_common.sh@10 -- # set +x 00:09:05.783 ************************************ 00:09:05.784 START TEST accel_dif_generate 00:09:05.784 ************************************ 00:09:05.784 10:05:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:09:05.784 10:05:38 -- accel/accel.sh@16 -- # local accel_opc 00:09:05.784 10:05:38 -- accel/accel.sh@17 -- # local accel_module 00:09:05.784 10:05:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:09:05.784 10:05:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:05.784 10:05:38 -- accel/accel.sh@12 -- # build_accel_config 00:09:05.784 10:05:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:05.784 10:05:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.784 10:05:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.784 10:05:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:05.784 10:05:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:05.784 10:05:38 -- accel/accel.sh@41 -- # local IFS=, 00:09:05.784 10:05:38 -- accel/accel.sh@42 -- # jq -r . 00:09:05.784 [2024-04-17 10:05:38.728297] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:05.784 [2024-04-17 10:05:38.728370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288597 ] 00:09:05.784 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.784 [2024-04-17 10:05:38.808922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.784 [2024-04-17 10:05:38.893222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.193 10:05:40 -- accel/accel.sh@18 -- # out=' 00:09:07.193 SPDK Configuration: 00:09:07.193 Core mask: 0x1 00:09:07.193 00:09:07.193 Accel Perf Configuration: 00:09:07.193 Workload Type: dif_generate 00:09:07.193 Vector size: 4096 bytes 00:09:07.193 Transfer size: 4096 bytes 00:09:07.193 Block size: 512 bytes 00:09:07.193 Metadata size: 8 bytes 00:09:07.193 Vector count 1 00:09:07.193 Module: software 00:09:07.193 Queue depth: 32 00:09:07.193 Allocate depth: 32 00:09:07.193 # threads/core: 1 00:09:07.193 Run time: 1 seconds 00:09:07.193 Verify: No 00:09:07.193 00:09:07.193 Running for 1 seconds... 
00:09:07.193 00:09:07.193 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:07.193 ------------------------------------------------------------------------------------ 00:09:07.193 0,0 97344/s 386 MiB/s 0 0 00:09:07.193 ==================================================================================== 00:09:07.193 Total 97344/s 380 MiB/s 0 0' 00:09:07.193 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.193 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.193 10:05:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:07.193 10:05:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:07.193 10:05:40 -- accel/accel.sh@12 -- # build_accel_config 00:09:07.193 10:05:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:07.193 10:05:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.193 10:05:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.193 10:05:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:07.194 10:05:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:07.194 10:05:40 -- accel/accel.sh@41 -- # local IFS=, 00:09:07.194 10:05:40 -- accel/accel.sh@42 -- # jq -r . 00:09:07.194 [2024-04-17 10:05:40.136203] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:07.194 [2024-04-17 10:05:40.136279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288867 ] 00:09:07.194 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.194 [2024-04-17 10:05:40.216048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.194 [2024-04-17 10:05:40.300109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=0x1 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=dif_generate 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 
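The long runs of IFS=:/read/case/val= lines in this trace are accel.sh walking through accel_perf's configuration dump one line at a time to learn which opcode and module were actually used. The loop has roughly this shape (a simplified sketch; the real script does more):

  # $out is assumed to hold the captured accel_perf output (as in the out=' ... ' assignment above)
  # parse "key: value" pairs such as "Workload Type: dif_generate" or "Module: software"
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=$(echo "$val" | xargs)    ;;  # xargs trims the leading space
          *'Module'*)        accel_module=$(echo "$val" | xargs) ;;
      esac
  done <<< "$out"
  [[ -n $accel_module && $accel_opc == dif_generate ]] && echo "dif_generate ran on the $accel_module module"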
00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val='512 bytes' 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val='8 bytes' 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=software 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@23 -- # accel_module=software 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=32 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=32 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=1 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val=No 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:07.194 10:05:40 -- accel/accel.sh@21 -- # val= 00:09:07.194 10:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # IFS=: 00:09:07.194 10:05:40 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- 
accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@21 -- # val= 00:09:08.571 10:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # IFS=: 00:09:08.571 10:05:41 -- accel/accel.sh@20 -- # read -r var val 00:09:08.571 10:05:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:08.571 10:05:41 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:09:08.571 10:05:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:08.571 00:09:08.571 real 0m2.816s 00:09:08.571 user 0m2.561s 00:09:08.571 sys 0m0.262s 00:09:08.571 10:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.571 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.571 ************************************ 00:09:08.571 END TEST accel_dif_generate 00:09:08.571 ************************************ 00:09:08.571 10:05:41 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:08.571 10:05:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:08.571 10:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.571 10:05:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.571 ************************************ 00:09:08.571 START TEST accel_dif_generate_copy 00:09:08.571 ************************************ 00:09:08.571 10:05:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:09:08.571 10:05:41 -- accel/accel.sh@16 -- # local accel_opc 00:09:08.571 10:05:41 -- accel/accel.sh@17 -- # local accel_module 00:09:08.571 10:05:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:09:08.571 10:05:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:08.571 10:05:41 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.571 10:05:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.571 10:05:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.571 10:05:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.571 10:05:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.571 10:05:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.571 10:05:41 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.571 10:05:41 -- accel/accel.sh@42 -- # jq -r . 00:09:08.571 [2024-04-17 10:05:41.580469] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
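The dif_verify case earlier, the dif_generate case that just finished, and the dif_generate_copy case starting here differ only in the -w argument; the 512-byte block and 8-byte metadata layout in their configuration dumps is simply what accel_perf chose by default in these runs. A sketch of exercising all three back to back, under the same hypothetical standalone setup as the earlier xor sketch:

  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  for wl in dif_verify dif_generate dif_generate_copy; do
      echo "== $wl =="
      "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl"
  done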
00:09:08.571 [2024-04-17 10:05:41.580527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289155 ] 00:09:08.571 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.571 [2024-04-17 10:05:41.660079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.571 [2024-04-17 10:05:41.743624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.949 10:05:42 -- accel/accel.sh@18 -- # out=' 00:09:09.949 SPDK Configuration: 00:09:09.949 Core mask: 0x1 00:09:09.949 00:09:09.949 Accel Perf Configuration: 00:09:09.949 Workload Type: dif_generate_copy 00:09:09.949 Vector size: 4096 bytes 00:09:09.949 Transfer size: 4096 bytes 00:09:09.949 Vector count 1 00:09:09.949 Module: software 00:09:09.949 Queue depth: 32 00:09:09.949 Allocate depth: 32 00:09:09.949 # threads/core: 1 00:09:09.949 Run time: 1 seconds 00:09:09.949 Verify: No 00:09:09.949 00:09:09.949 Running for 1 seconds... 00:09:09.949 00:09:09.949 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:09.949 ------------------------------------------------------------------------------------ 00:09:09.949 0,0 75872/s 301 MiB/s 0 0 00:09:09.949 ==================================================================================== 00:09:09.949 Total 75872/s 296 MiB/s 0 0' 00:09:09.949 10:05:42 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:42 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:09.949 10:05:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:09.949 10:05:42 -- accel/accel.sh@12 -- # build_accel_config 00:09:09.949 10:05:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:09.949 10:05:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:09.949 10:05:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:09.949 10:05:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:09.949 10:05:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:09.949 10:05:42 -- accel/accel.sh@41 -- # local IFS=, 00:09:09.949 10:05:42 -- accel/accel.sh@42 -- # jq -r . 00:09:09.949 [2024-04-17 10:05:42.980790] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
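Because every measured run ends with a single "Total <ops>/s <MiB/s>" line, per-workload throughput can be pulled straight out of a saved copy of this console. A small sketch, assuming the log was saved as build.log (the file name is an assumption) and that each record sits on its own line as it does in the raw console output:

  # print: workload, operations/s, MiB/s for every accel_perf summary in the log
  awk '/Workload Type:/   { wl = $NF }
       /Total [0-9]+\/s/  { print wl, $(NF-4), $(NF-3) " " $(NF-2) }' build.log

For the dif_generate_copy run above this would yield a line like "dif_generate_copy 75872/s 296 MiB/s".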
00:09:09.949 [2024-04-17 10:05:42.980850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289419 ] 00:09:09.949 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.949 [2024-04-17 10:05:43.061167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.949 [2024-04-17 10:05:43.143960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=0x1 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=software 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@23 -- # accel_module=software 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=32 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=32 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r 
var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=1 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val=No 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.949 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.949 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.949 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.950 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.950 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:09.950 10:05:43 -- accel/accel.sh@21 -- # val= 00:09:09.950 10:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.950 10:05:43 -- accel/accel.sh@20 -- # IFS=: 00:09:09.950 10:05:43 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@21 -- # val= 00:09:11.326 10:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # IFS=: 00:09:11.326 10:05:44 -- accel/accel.sh@20 -- # read -r var val 00:09:11.326 10:05:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:11.326 10:05:44 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:09:11.326 10:05:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:11.326 00:09:11.326 real 0m2.805s 00:09:11.326 user 0m2.538s 00:09:11.326 sys 0m0.273s 00:09:11.326 10:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.326 10:05:44 -- common/autotest_common.sh@10 -- # set +x 00:09:11.326 ************************************ 00:09:11.326 END TEST accel_dif_generate_copy 00:09:11.326 ************************************ 00:09:11.326 10:05:44 -- accel/accel.sh@107 -- # [[ y == y ]] 00:09:11.326 10:05:44 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.326 10:05:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:11.326 10:05:44 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.326 10:05:44 -- common/autotest_common.sh@10 -- # set +x 00:09:11.326 ************************************ 00:09:11.326 START TEST accel_comp 00:09:11.326 ************************************ 00:09:11.326 10:05:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.327 10:05:44 -- accel/accel.sh@16 -- # local accel_opc 00:09:11.327 10:05:44 -- accel/accel.sh@17 -- # local accel_module 00:09:11.327 10:05:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.327 10:05:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:11.327 10:05:44 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.327 10:05:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:11.327 10:05:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.327 10:05:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.327 10:05:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:11.327 10:05:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:11.327 10:05:44 -- accel/accel.sh@41 -- # local IFS=, 00:09:11.327 10:05:44 -- accel/accel.sh@42 -- # jq -r . 00:09:11.327 [2024-04-17 10:05:44.426729] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:11.327 [2024-04-17 10:05:44.426801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289701 ] 00:09:11.327 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.327 [2024-04-17 10:05:44.506996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.327 [2024-04-17 10:05:44.590723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.703 10:05:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:12.703 00:09:12.703 SPDK Configuration: 00:09:12.703 Core mask: 0x1 00:09:12.703 00:09:12.703 Accel Perf Configuration: 00:09:12.703 Workload Type: compress 00:09:12.703 Transfer size: 4096 bytes 00:09:12.703 Vector count 1 00:09:12.703 Module: software 00:09:12.703 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:12.703 Queue depth: 32 00:09:12.704 Allocate depth: 32 00:09:12.704 # threads/core: 1 00:09:12.704 Run time: 1 seconds 00:09:12.704 Verify: No 00:09:12.704 00:09:12.704 Running for 1 seconds... 
00:09:12.704 00:09:12.704 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:12.704 ------------------------------------------------------------------------------------ 00:09:12.704 0,0 40192/s 167 MiB/s 0 0 00:09:12.704 ==================================================================================== 00:09:12.704 Total 40192/s 157 MiB/s 0 0' 00:09:12.704 10:05:45 -- accel/accel.sh@20 -- # IFS=: 00:09:12.704 10:05:45 -- accel/accel.sh@20 -- # read -r var val 00:09:12.704 10:05:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:12.704 10:05:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:12.704 10:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:09:12.704 10:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:12.704 10:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:12.704 10:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:12.704 10:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:12.704 10:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:12.704 10:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:09:12.704 10:05:45 -- accel/accel.sh@42 -- # jq -r . 00:09:12.704 [2024-04-17 10:05:45.830035] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:12.704 [2024-04-17 10:05:45.830096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289974 ] 00:09:12.704 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.704 [2024-04-17 10:05:45.909960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.704 [2024-04-17 10:05:45.992903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val=0x1 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.962 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.962 10:05:46 -- accel/accel.sh@21 -- # val=compress 00:09:12.962 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.962 
10:05:46 -- accel/accel.sh@24 -- # accel_opc=compress 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=software 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@23 -- # accel_module=software 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=32 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=32 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=1 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val=No 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:12.963 10:05:46 -- accel/accel.sh@21 -- # val= 00:09:12.963 10:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # IFS=: 00:09:12.963 10:05:46 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # 
IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@21 -- # val= 00:09:13.898 10:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # IFS=: 00:09:13.898 10:05:47 -- accel/accel.sh@20 -- # read -r var val 00:09:13.898 10:05:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:13.898 10:05:47 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:09:13.898 10:05:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.898 00:09:13.898 real 0m2.813s 00:09:13.898 user 0m2.558s 00:09:13.898 sys 0m0.259s 00:09:13.898 10:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.898 10:05:47 -- common/autotest_common.sh@10 -- # set +x 00:09:13.898 ************************************ 00:09:13.898 END TEST accel_comp 00:09:13.898 ************************************ 00:09:14.158 10:05:47 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:14.158 10:05:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:14.158 10:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.158 10:05:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.158 ************************************ 00:09:14.158 START TEST accel_decomp 00:09:14.158 ************************************ 00:09:14.158 10:05:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:14.158 10:05:47 -- accel/accel.sh@16 -- # local accel_opc 00:09:14.158 10:05:47 -- accel/accel.sh@17 -- # local accel_module 00:09:14.158 10:05:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:14.158 10:05:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:14.158 10:05:47 -- accel/accel.sh@12 -- # build_accel_config 00:09:14.158 10:05:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:14.158 10:05:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.158 10:05:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.158 10:05:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:14.158 10:05:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:14.158 10:05:47 -- accel/accel.sh@41 -- # local IFS=, 00:09:14.158 10:05:47 -- accel/accel.sh@42 -- # jq -r . 00:09:14.158 [2024-04-17 10:05:47.277376] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
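The accel_comp test that just finished and the accel_decomp test starting here drive the same accel_perf binary against the pre-generated bib file, switching only the workload (-w compress vs. -w decompress, the latter with -y to verify results, which is why the banners show Verify: No and Verify: Yes respectively). The full command lines are echoed in the xtrace above; a minimal hand-run reproduction might look like the sketch below. The SPDK checkout path is this CI workspace's, and the empty accel JSON config passed to -c is an assumption (the harness builds its own config and hands it over on /dev/fd/62).

    # Sketch only: re-run the two single-core cases by hand, under the assumptions above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB=$SPDK_DIR/test/accel/bib                                # input file used by the compress/decompress tests
    CFG='{"subsystems":[{"subsystem":"accel","config":[]}]}'    # assumed empty accel config

    # compress for 1 second on the software module (Verify: No in the banner above)
    "$SPDK_DIR/build/examples/accel_perf" -c <(echo "$CFG") -t 1 -w compress -l "$BIB"
    # decompress with verification (-y), as echoed at the start of this test
    "$SPDK_DIR/build/examples/accel_perf" -c <(echo "$CFG") -t 1 -w decompress -l "$BIB" -y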
00:09:14.158 [2024-04-17 10:05:47.277439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290255 ] 00:09:14.158 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.158 [2024-04-17 10:05:47.359240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.158 [2024-04-17 10:05:47.440630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.535 10:05:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:15.535 00:09:15.535 SPDK Configuration: 00:09:15.535 Core mask: 0x1 00:09:15.535 00:09:15.535 Accel Perf Configuration: 00:09:15.535 Workload Type: decompress 00:09:15.535 Transfer size: 4096 bytes 00:09:15.535 Vector count 1 00:09:15.535 Module: software 00:09:15.535 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:15.535 Queue depth: 32 00:09:15.535 Allocate depth: 32 00:09:15.535 # threads/core: 1 00:09:15.535 Run time: 1 seconds 00:09:15.535 Verify: Yes 00:09:15.535 00:09:15.535 Running for 1 seconds... 00:09:15.535 00:09:15.535 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:15.535 ------------------------------------------------------------------------------------ 00:09:15.535 0,0 46656/s 85 MiB/s 0 0 00:09:15.535 ==================================================================================== 00:09:15.535 Total 46656/s 182 MiB/s 0 0' 00:09:15.535 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.535 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.535 10:05:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:15.535 10:05:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:15.535 10:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:09:15.535 10:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:15.535 10:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.535 10:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.535 10:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:15.535 10:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:15.535 10:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:09:15.535 10:05:48 -- accel/accel.sh@42 -- # jq -r . 00:09:15.536 [2024-04-17 10:05:48.683479] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
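The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs before every run in this section. The reactors still start and the tests complete, so on this host it appears to be informational: the 2048 kB hugepage pool is presumably reserved on NUMA node 0 only. Two quick ways to inspect the per-node pools on a machine like this (standard Linux paths, not taken from this log):

    # Overall hugepage accounting for the host
    grep -i huge /proc/meminfo
    # Per-NUMA-node 2048 kB pools; node 1 reporting 0 would explain the EAL notice
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages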
00:09:15.536 [2024-04-17 10:05:48.683555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290525 ] 00:09:15.536 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.536 [2024-04-17 10:05:48.765525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.536 [2024-04-17 10:05:48.848579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=0x1 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=decompress 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=software 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@23 -- # accel_module=software 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=32 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 
-- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=32 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=1 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val=Yes 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:15.795 10:05:48 -- accel/accel.sh@21 -- # val= 00:09:15.795 10:05:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # IFS=: 00:09:15.795 10:05:48 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@21 -- # val= 00:09:16.741 10:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # IFS=: 00:09:16.741 10:05:50 -- accel/accel.sh@20 -- # read -r var val 00:09:16.741 10:05:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:16.741 10:05:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:16.741 10:05:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:16.741 00:09:16.741 real 0m2.821s 00:09:16.741 user 0m2.552s 00:09:16.741 sys 0m0.274s 00:09:16.741 10:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.741 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:16.741 ************************************ 00:09:16.741 END TEST accel_decomp 00:09:16.741 ************************************ 00:09:16.999 10:05:50 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:16.999 10:05:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:16.999 10:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:16.999 10:05:50 -- common/autotest_common.sh@10 -- # set +x 00:09:16.999 ************************************ 00:09:16.999 START TEST accel_decmop_full 00:09:16.999 ************************************ 00:09:16.999 10:05:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:16.999 10:05:50 -- accel/accel.sh@16 -- # local accel_opc 00:09:16.999 10:05:50 -- accel/accel.sh@17 -- # local accel_module 00:09:16.999 10:05:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:16.999 10:05:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:16.999 10:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:09:16.999 10:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:16.999 10:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:16.999 10:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.999 10:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:16.999 10:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:16.999 10:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:09:16.999 10:05:50 -- accel/accel.sh@42 -- # jq -r . 00:09:16.999 [2024-04-17 10:05:50.137220] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:16.999 [2024-04-17 10:05:50.137284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290809 ] 00:09:16.999 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.999 [2024-04-17 10:05:50.218733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.999 [2024-04-17 10:05:50.301873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.376 10:05:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:18.376 00:09:18.376 SPDK Configuration: 00:09:18.376 Core mask: 0x1 00:09:18.376 00:09:18.376 Accel Perf Configuration: 00:09:18.376 Workload Type: decompress 00:09:18.376 Transfer size: 111250 bytes 00:09:18.376 Vector count 1 00:09:18.376 Module: software 00:09:18.376 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:18.376 Queue depth: 32 00:09:18.376 Allocate depth: 32 00:09:18.376 # threads/core: 1 00:09:18.376 Run time: 1 seconds 00:09:18.376 Verify: Yes 00:09:18.376 00:09:18.376 Running for 1 seconds... 
00:09:18.376 00:09:18.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:18.376 ------------------------------------------------------------------------------------ 00:09:18.376 0,0 3136/s 129 MiB/s 0 0 00:09:18.376 ==================================================================================== 00:09:18.376 Total 3136/s 332 MiB/s 0 0' 00:09:18.376 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.376 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.376 10:05:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:18.376 10:05:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:18.376 10:05:51 -- accel/accel.sh@12 -- # build_accel_config 00:09:18.376 10:05:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:18.376 10:05:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:18.376 10:05:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:18.376 10:05:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:18.376 10:05:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:18.376 10:05:51 -- accel/accel.sh@41 -- # local IFS=, 00:09:18.376 10:05:51 -- accel/accel.sh@42 -- # jq -r . 00:09:18.376 [2024-04-17 10:05:51.555784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:18.376 [2024-04-17 10:05:51.555859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291073 ] 00:09:18.376 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.376 [2024-04-17 10:05:51.637158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.635 [2024-04-17 10:05:51.720856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=0x1 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=decompress 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" 
in 00:09:18.635 10:05:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=software 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@23 -- # accel_module=software 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=32 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=32 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=1 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val=Yes 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:18.635 10:05:51 -- accel/accel.sh@21 -- # val= 00:09:18.635 10:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # IFS=: 00:09:18.635 10:05:51 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- 
accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@21 -- # val= 00:09:20.023 10:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # IFS=: 00:09:20.023 10:05:52 -- accel/accel.sh@20 -- # read -r var val 00:09:20.023 10:05:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:20.023 10:05:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:20.023 10:05:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:20.023 00:09:20.023 real 0m2.840s 00:09:20.023 user 0m2.562s 00:09:20.023 sys 0m0.283s 00:09:20.023 10:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.023 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:20.023 ************************************ 00:09:20.023 END TEST accel_decmop_full 00:09:20.023 ************************************ 00:09:20.023 10:05:52 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:20.023 10:05:52 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:20.023 10:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.023 10:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:20.023 ************************************ 00:09:20.023 START TEST accel_decomp_mcore 00:09:20.023 ************************************ 00:09:20.023 10:05:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:20.023 10:05:52 -- accel/accel.sh@16 -- # local accel_opc 00:09:20.023 10:05:52 -- accel/accel.sh@17 -- # local accel_module 00:09:20.023 10:05:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:20.023 10:05:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:20.023 10:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:09:20.023 10:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:20.023 10:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.023 10:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.023 10:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:20.023 10:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:20.023 10:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:20.023 10:05:52 -- accel/accel.sh@42 -- # jq -r . 00:09:20.023 [2024-04-17 10:05:53.016546] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
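The accel_decomp_mcore run starting here adds -m 0xf, so accel_perf is scheduled on four reactors instead of one; the "Total cores available: 4" notice, the four "Reactor started on core N" lines, and the four Core,Thread rows in the table that follows all correspond to that mask. A hedged reproduction, under the same path and config assumptions as the earlier sketch:

    # Sketch only: multi-core decompress, core mask 0xf = cores 0-3.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    CFG='{"subsystems":[{"subsystem":"accel","config":[]}]}'    # assumed empty accel config
    "$SPDK_DIR/build/examples/accel_perf" -c <(echo "$CFG") \
        -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf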
00:09:20.023 [2024-04-17 10:05:53.016621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291360 ] 00:09:20.023 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.023 [2024-04-17 10:05:53.098382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.023 [2024-04-17 10:05:53.185011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.023 [2024-04-17 10:05:53.185113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.023 [2024-04-17 10:05:53.185251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.023 [2024-04-17 10:05:53.185251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.401 10:05:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:21.401 00:09:21.401 SPDK Configuration: 00:09:21.401 Core mask: 0xf 00:09:21.401 00:09:21.401 Accel Perf Configuration: 00:09:21.401 Workload Type: decompress 00:09:21.401 Transfer size: 4096 bytes 00:09:21.401 Vector count 1 00:09:21.401 Module: software 00:09:21.401 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:21.401 Queue depth: 32 00:09:21.401 Allocate depth: 32 00:09:21.401 # threads/core: 1 00:09:21.401 Run time: 1 seconds 00:09:21.401 Verify: Yes 00:09:21.401 00:09:21.401 Running for 1 seconds... 00:09:21.401 00:09:21.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:21.401 ------------------------------------------------------------------------------------ 00:09:21.401 0,0 42400/s 78 MiB/s 0 0 00:09:21.401 3,0 42656/s 78 MiB/s 0 0 00:09:21.401 2,0 67648/s 124 MiB/s 0 0 00:09:21.401 1,0 42624/s 78 MiB/s 0 0 00:09:21.401 ==================================================================================== 00:09:21.401 Total 195328/s 763 MiB/s 0 0' 00:09:21.401 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.401 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.401 10:05:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:21.401 10:05:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:21.401 10:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.401 10:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.401 10:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.401 10:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.401 10:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.401 10:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.401 10:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.401 10:05:54 -- accel/accel.sh@42 -- # jq -r . 00:09:21.401 [2024-04-17 10:05:54.433869] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:21.401 [2024-04-17 10:05:54.433943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291630 ] 00:09:21.401 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.401 [2024-04-17 10:05:54.515359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.401 [2024-04-17 10:05:54.601443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.402 [2024-04-17 10:05:54.601532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.402 [2024-04-17 10:05:54.601668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.402 [2024-04-17 10:05:54.601668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=0xf 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=decompress 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=software 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@23 -- # accel_module=software 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case 
"$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=32 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=32 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=1 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val=Yes 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:21.402 10:05:54 -- accel/accel.sh@21 -- # val= 00:09:21.402 10:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # IFS=: 00:09:21.402 10:05:54 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 
10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@21 -- # val= 00:09:22.777 10:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # IFS=: 00:09:22.777 10:05:55 -- accel/accel.sh@20 -- # read -r var val 00:09:22.777 10:05:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:22.777 10:05:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:22.777 10:05:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:22.777 00:09:22.777 real 0m2.842s 00:09:22.777 user 0m9.266s 00:09:22.777 sys 0m0.288s 00:09:22.777 10:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.777 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.777 ************************************ 00:09:22.777 END TEST accel_decomp_mcore 00:09:22.777 ************************************ 00:09:22.777 10:05:55 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:22.777 10:05:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:22.777 10:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.777 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.777 ************************************ 00:09:22.777 START TEST accel_decomp_full_mcore 00:09:22.777 ************************************ 00:09:22.777 10:05:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:22.777 10:05:55 -- accel/accel.sh@16 -- # local accel_opc 00:09:22.777 10:05:55 -- accel/accel.sh@17 -- # local accel_module 00:09:22.777 10:05:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:22.777 10:05:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:22.777 10:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.777 10:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:22.777 10:05:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.777 10:05:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.777 10:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:22.777 10:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:22.777 10:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:09:22.777 10:05:55 -- accel/accel.sh@42 -- # jq -r . 00:09:22.777 [2024-04-17 10:05:55.895091] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:22.777 [2024-04-17 10:05:55.895151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291920 ] 00:09:22.777 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.777 [2024-04-17 10:05:55.976575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.777 [2024-04-17 10:05:56.063178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.777 [2024-04-17 10:05:56.063280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.777 [2024-04-17 10:05:56.063426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.777 [2024-04-17 10:05:56.063427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.153 10:05:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:24.153 00:09:24.153 SPDK Configuration: 00:09:24.153 Core mask: 0xf 00:09:24.153 00:09:24.153 Accel Perf Configuration: 00:09:24.153 Workload Type: decompress 00:09:24.153 Transfer size: 111250 bytes 00:09:24.153 Vector count 1 00:09:24.153 Module: software 00:09:24.153 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:24.153 Queue depth: 32 00:09:24.153 Allocate depth: 32 00:09:24.153 # threads/core: 1 00:09:24.153 Run time: 1 seconds 00:09:24.153 Verify: Yes 00:09:24.153 00:09:24.153 Running for 1 seconds... 00:09:24.153 00:09:24.153 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:24.153 ------------------------------------------------------------------------------------ 00:09:24.153 0,0 3136/s 129 MiB/s 0 0 00:09:24.153 3,0 3136/s 129 MiB/s 0 0 00:09:24.153 2,0 5216/s 215 MiB/s 0 0 00:09:24.153 1,0 3136/s 129 MiB/s 0 0 00:09:24.153 ==================================================================================== 00:09:24.153 Total 14624/s 1551 MiB/s 0 0' 00:09:24.153 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.153 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.153 10:05:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:24.153 10:05:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:24.153 10:05:57 -- accel/accel.sh@12 -- # build_accel_config 00:09:24.153 10:05:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.153 10:05:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.153 10:05:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.153 10:05:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.153 10:05:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.153 10:05:57 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.153 10:05:57 -- accel/accel.sh@42 -- # jq -r . 00:09:24.153 [2024-04-17 10:05:57.327279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
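In the accel_decomp_full_mcore table above, the Total row adds up to the sum of the per-core transfer rates, and the aggregate bandwidth follows from the 111250-byte transfer size reported in the configuration banner. A quick sanity check with the numbers copied from this run:

    # Values copied from the table above: transfers/s for cores 0, 3, 2, 1.
    per_core=(3136 3136 5216 3136)
    total=0; for t in "${per_core[@]}"; do total=$((total + t)); done
    echo "total transfers/s: $total"                         # 14624, matching the Total row
    echo "aggregate MiB/s:   $((total * 111250 / 1048576))"  # ~1551 MiB/s at 111250 B per transfer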
00:09:24.153 [2024-04-17 10:05:57.327346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292187 ] 00:09:24.153 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.153 [2024-04-17 10:05:57.406442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.412 [2024-04-17 10:05:57.492780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.412 [2024-04-17 10:05:57.492881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.412 [2024-04-17 10:05:57.493010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.412 [2024-04-17 10:05:57.493011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val=0xf 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val=decompress 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val=software 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@23 -- # accel_module=software 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case 
"$var" in 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.412 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.412 10:05:57 -- accel/accel.sh@21 -- # val=32 00:09:24.412 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val=32 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val=1 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val=Yes 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:24.413 10:05:57 -- accel/accel.sh@21 -- # val= 00:09:24.413 10:05:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # IFS=: 00:09:24.413 10:05:57 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 
10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@21 -- # val= 00:09:25.789 10:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # IFS=: 00:09:25.789 10:05:58 -- accel/accel.sh@20 -- # read -r var val 00:09:25.789 10:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:25.789 10:05:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:25.789 10:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:25.789 00:09:25.789 real 0m2.869s 00:09:25.789 user 0m9.385s 00:09:25.789 sys 0m0.288s 00:09:25.789 10:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.789 10:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:25.789 ************************************ 00:09:25.789 END TEST accel_decomp_full_mcore 00:09:25.789 ************************************ 00:09:25.789 10:05:58 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:25.789 10:05:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:25.789 10:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.789 10:05:58 -- common/autotest_common.sh@10 -- # set +x 00:09:25.789 ************************************ 00:09:25.789 START TEST accel_decomp_mthread 00:09:25.789 ************************************ 00:09:25.789 10:05:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:25.789 10:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:09:25.789 10:05:58 -- accel/accel.sh@17 -- # local accel_module 00:09:25.789 10:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:25.789 10:05:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:25.789 10:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:09:25.789 10:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:25.789 10:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:25.789 10:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.789 10:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:25.789 10:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:25.789 10:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:25.789 10:05:58 -- accel/accel.sh@42 -- # jq -r . 00:09:25.789 [2024-04-17 10:05:58.802687] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:25.789 [2024-04-17 10:05:58.802747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292480 ] 00:09:25.789 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.789 [2024-04-17 10:05:58.883094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.789 [2024-04-17 10:05:58.967178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.166 10:06:00 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:27.166 00:09:27.166 SPDK Configuration: 00:09:27.166 Core mask: 0x1 00:09:27.166 00:09:27.166 Accel Perf Configuration: 00:09:27.166 Workload Type: decompress 00:09:27.166 Transfer size: 4096 bytes 00:09:27.166 Vector count 1 00:09:27.166 Module: software 00:09:27.166 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:27.166 Queue depth: 32 00:09:27.166 Allocate depth: 32 00:09:27.166 # threads/core: 2 00:09:27.166 Run time: 1 seconds 00:09:27.166 Verify: Yes 00:09:27.166 00:09:27.166 Running for 1 seconds... 00:09:27.166 00:09:27.166 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:27.166 ------------------------------------------------------------------------------------ 00:09:27.166 0,1 23680/s 43 MiB/s 0 0 00:09:27.166 0,0 23552/s 43 MiB/s 0 0 00:09:27.166 ==================================================================================== 00:09:27.166 Total 47232/s 184 MiB/s 0 0' 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:27.166 10:06:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:27.166 10:06:00 -- accel/accel.sh@12 -- # build_accel_config 00:09:27.166 10:06:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:27.166 10:06:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:27.166 10:06:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:27.166 10:06:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:27.166 10:06:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:27.166 10:06:00 -- accel/accel.sh@41 -- # local IFS=, 00:09:27.166 10:06:00 -- accel/accel.sh@42 -- # jq -r . 00:09:27.166 [2024-04-17 10:06:00.214796] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
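The software decompress case traced above can be re-run by hand with the same accel_perf flags visible in the trace. This is a minimal sketch, assuming the SPDK checkout lives at the workspace path used throughout this job; the -c /dev/fd/62 argument that feeds build_accel_config's JSON is dropped here on the assumption that a software-only run needs no accel config:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software decompress of test/accel/bib, verify enabled (-y),
  # two worker threads per core (-T 2), matching the "# threads/core: 2" line above
  "$SPDK_DIR"/build/examples/accel_perf -t 1 -w decompress \
      -l "$SPDK_DIR"/test/accel/bib -y -T 2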
00:09:27.166 [2024-04-17 10:06:00.214865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292760 ] 00:09:27.166 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.166 [2024-04-17 10:06:00.294420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.166 [2024-04-17 10:06:00.378142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=0x1 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=decompress 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=software 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@23 -- # accel_module=software 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=32 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 
-- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=32 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=2 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val=Yes 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.166 10:06:00 -- accel/accel.sh@21 -- # val= 00:09:27.166 10:06:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # IFS=: 00:09:27.166 10:06:00 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@21 -- # val= 00:09:28.543 10:06:01 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # IFS=: 00:09:28.543 10:06:01 -- accel/accel.sh@20 -- # read -r var val 00:09:28.543 10:06:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:28.543 10:06:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:28.543 10:06:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:28.543 00:09:28.543 real 0m2.827s 00:09:28.543 user 0m2.567s 00:09:28.543 sys 0m0.266s 00:09:28.543 10:06:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.543 10:06:01 -- common/autotest_common.sh@10 -- # set +x 
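The repetitive accel.sh@20–@24 trace lines above come from accel.sh's value loop, which reads colon-separated var:val pairs and folds them into the test configuration (accel_opc, accel_module, and so on). A rough reconstruction of that idiom follows; the key names and the input file are placeholders, not the real accel.sh contents:

  # hedged sketch of the IFS=: / read -r var val / case "$var" pattern seen in the trace
  while IFS=: read -r var val; do
      case "$var" in
          accel_opc)    accel_opc=$val ;;     # e.g. decompress
          accel_module) accel_module=$val ;;  # e.g. software
          *)            : ;;                  # everything else ignored in this sketch
      esac
  done < config.txt   # hypothetical input of key:value lines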
00:09:28.543 ************************************ 00:09:28.543 END TEST accel_decomp_mthread 00:09:28.543 ************************************ 00:09:28.543 10:06:01 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:28.543 10:06:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:28.543 10:06:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.543 10:06:01 -- common/autotest_common.sh@10 -- # set +x 00:09:28.543 ************************************ 00:09:28.543 START TEST accel_deomp_full_mthread 00:09:28.543 ************************************ 00:09:28.543 10:06:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:28.543 10:06:01 -- accel/accel.sh@16 -- # local accel_opc 00:09:28.543 10:06:01 -- accel/accel.sh@17 -- # local accel_module 00:09:28.543 10:06:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:28.543 10:06:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:28.543 10:06:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:28.543 10:06:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:28.543 10:06:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:28.543 10:06:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:28.543 10:06:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:28.543 10:06:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:28.543 10:06:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:28.543 10:06:01 -- accel/accel.sh@42 -- # jq -r . 00:09:28.543 [2024-04-17 10:06:01.668902] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:28.543 [2024-04-17 10:06:01.668975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293118 ] 00:09:28.543 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.543 [2024-04-17 10:06:01.750437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.543 [2024-04-17 10:06:01.834074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.917 10:06:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:29.917 00:09:29.917 SPDK Configuration: 00:09:29.917 Core mask: 0x1 00:09:29.917 00:09:29.917 Accel Perf Configuration: 00:09:29.917 Workload Type: decompress 00:09:29.917 Transfer size: 111250 bytes 00:09:29.917 Vector count 1 00:09:29.917 Module: software 00:09:29.917 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:29.917 Queue depth: 32 00:09:29.917 Allocate depth: 32 00:09:29.917 # threads/core: 2 00:09:29.917 Run time: 1 seconds 00:09:29.917 Verify: Yes 00:09:29.917 00:09:29.917 Running for 1 seconds... 
00:09:29.917 00:09:29.917 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:29.917 ------------------------------------------------------------------------------------ 00:09:29.917 0,1 1600/s 66 MiB/s 0 0 00:09:29.917 0,0 1600/s 66 MiB/s 0 0 00:09:29.917 ==================================================================================== 00:09:29.917 Total 3200/s 339 MiB/s 0 0' 00:09:29.917 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:29.917 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:29.917 10:06:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:29.917 10:06:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:29.917 10:06:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.917 10:06:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.917 10:06:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.917 10:06:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.917 10:06:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.917 10:06:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.917 10:06:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.917 10:06:03 -- accel/accel.sh@42 -- # jq -r . 00:09:29.917 [2024-04-17 10:06:03.109748] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:29.917 [2024-04-17 10:06:03.109813] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293423 ] 00:09:29.917 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.917 [2024-04-17 10:06:03.188510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.176 [2024-04-17 10:06:03.272313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=0x1 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=decompress 00:09:30.176 
10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=software 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@23 -- # accel_module=software 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=32 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=32 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=2 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val=Yes 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:30.176 10:06:03 -- accel/accel.sh@21 -- # val= 00:09:30.176 10:06:03 -- accel/accel.sh@22 -- # case "$var" in 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # IFS=: 00:09:30.176 10:06:03 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # 
case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@21 -- # val= 00:09:31.553 10:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.553 10:06:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.553 10:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:31.553 10:06:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:31.553 10:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:31.553 00:09:31.553 real 0m2.895s 00:09:31.553 user 0m2.638s 00:09:31.553 sys 0m0.261s 00:09:31.553 10:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.553 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:09:31.553 ************************************ 00:09:31.553 END TEST accel_deomp_full_mthread 00:09:31.553 ************************************ 00:09:31.553 10:06:04 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:31.553 10:06:04 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:31.553 10:06:04 -- accel/accel.sh@129 -- # build_accel_config 00:09:31.553 10:06:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:31.553 10:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:31.553 10:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.553 10:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.553 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:09:31.553 10:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.553 10:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:31.553 10:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:31.553 10:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:31.553 10:06:04 -- accel/accel.sh@42 -- # jq -r . 00:09:31.553 ************************************ 00:09:31.553 START TEST accel_dif_functional_tests 00:09:31.553 ************************************ 00:09:31.553 10:06:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:31.553 [2024-04-17 10:06:04.621675] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
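Before the DIF tests start, note that the two decompress totals reported above are self-consistent (a back-of-the-envelope check, not part of the test output): the full-buffer run moves 3200 transfers/s × 111250 bytes ≈ 356 MB/s ≈ 339 MiB/s, matching its Total row, and the earlier 4096-byte run moves 47232 × 4096 ≈ 193 MB/s ≈ 184 MiB/s, matching its table as well.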
00:09:31.553 [2024-04-17 10:06:04.621738] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293703 ] 00:09:31.553 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.553 [2024-04-17 10:06:04.699572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.553 [2024-04-17 10:06:04.785375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.553 [2024-04-17 10:06:04.785499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.553 [2024-04-17 10:06:04.785503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.553 00:09:31.553 00:09:31.553 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.553 http://cunit.sourceforge.net/ 00:09:31.553 00:09:31.553 00:09:31.553 Suite: accel_dif 00:09:31.553 Test: verify: DIF generated, GUARD check ...passed 00:09:31.553 Test: verify: DIF generated, APPTAG check ...passed 00:09:31.553 Test: verify: DIF generated, REFTAG check ...passed 00:09:31.553 Test: verify: DIF not generated, GUARD check ...[2024-04-17 10:06:04.860238] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:31.553 [2024-04-17 10:06:04.860295] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:31.553 passed 00:09:31.553 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 10:06:04.860335] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:31.553 [2024-04-17 10:06:04.860355] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:31.553 passed 00:09:31.553 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 10:06:04.860379] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:31.553 [2024-04-17 10:06:04.860398] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:31.553 passed 00:09:31.553 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:31.553 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 10:06:04.860454] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:31.553 passed 00:09:31.553 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:31.553 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:31.553 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:31.553 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-17 10:06:04.860595] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:31.553 passed 00:09:31.553 Test: generate copy: DIF generated, GUARD check ...passed 00:09:31.553 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:31.553 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:31.553 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:31.553 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:31.553 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:31.553 Test: generate copy: iovecs-len validate ...[2024-04-17 10:06:04.860828] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:31.553 passed 00:09:31.553 Test: generate copy: buffer alignment validate ...passed 00:09:31.553 00:09:31.553 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.553 suites 1 1 n/a 0 0 00:09:31.553 tests 20 20 20 0 0 00:09:31.553 asserts 204 204 204 0 n/a 00:09:31.553 00:09:31.553 Elapsed time = 0.002 seconds 00:09:31.812 00:09:31.812 real 0m0.490s 00:09:31.812 user 0m0.706s 00:09:31.812 sys 0m0.166s 00:09:31.812 10:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.812 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:09:31.812 ************************************ 00:09:31.812 END TEST accel_dif_functional_tests 00:09:31.812 ************************************ 00:09:31.812 00:09:31.812 real 1m0.342s 00:09:31.812 user 1m8.284s 00:09:31.812 sys 0m6.990s 00:09:31.812 10:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.812 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:09:31.812 ************************************ 00:09:31.812 END TEST accel 00:09:31.812 ************************************ 00:09:31.812 10:06:05 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:31.812 10:06:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:31.812 10:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.812 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:09:32.070 ************************************ 00:09:32.070 START TEST accel_rpc 00:09:32.070 ************************************ 00:09:32.070 10:06:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:32.070 * Looking for test storage... 00:09:32.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:32.070 10:06:05 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:32.070 10:06:05 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3293901 00:09:32.070 10:06:05 -- accel/accel_rpc.sh@15 -- # waitforlisten 3293901 00:09:32.070 10:06:05 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:32.070 10:06:05 -- common/autotest_common.sh@819 -- # '[' -z 3293901 ']' 00:09:32.070 10:06:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.070 10:06:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.070 10:06:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.070 10:06:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.070 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:09:32.070 [2024-04-17 10:06:05.283117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
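The accel_rpc test starting here relies on spdk_tgt's --wait-for-rpc mode: the copy opcode is assigned to a module before framework_start_init brings the subsystems up. A minimal manual equivalent of the sequence traced below, using the same RPC names as the trace and assuming the target answers on the /var/tmp/spdk.sock socket mentioned in the log:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &                     # the harness waits for the socket via waitforlisten
  "$SPDK_DIR"/scripts/rpc.py accel_assign_opc -o copy -m software     # choose the module for the copy opcode
  "$SPDK_DIR"/scripts/rpc.py framework_start_init                     # start subsystems with that assignment in place
  "$SPDK_DIR"/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", as the trace below confirms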
00:09:32.070 [2024-04-17 10:06:05.283181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293901 ] 00:09:32.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.070 [2024-04-17 10:06:05.365199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.329 [2024-04-17 10:06:05.454243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.329 [2024-04-17 10:06:05.454389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.896 10:06:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.896 10:06:06 -- common/autotest_common.sh@852 -- # return 0 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:32.896 10:06:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.896 10:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.896 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:32.896 ************************************ 00:09:32.896 START TEST accel_assign_opcode 00:09:32.896 ************************************ 00:09:32.896 10:06:06 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:32.896 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:32.896 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:32.896 [2024-04-17 10:06:06.212667] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:32.896 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:32.896 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:32.896 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:32.896 [2024-04-17 10:06:06.220682] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:32.896 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:32.896 10:06:06 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:32.896 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:32.896 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.155 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:33.155 10:06:06 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:33.155 10:06:06 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:33.155 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:33.155 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.155 10:06:06 -- accel/accel_rpc.sh@42 -- # grep software 00:09:33.155 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:33.155 software 00:09:33.155 00:09:33.155 real 0m0.251s 00:09:33.155 user 0m0.050s 00:09:33.155 sys 0m0.008s 00:09:33.155 10:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.155 10:06:06 -- common/autotest_common.sh@10 -- # set +x 
00:09:33.155 ************************************ 00:09:33.155 END TEST accel_assign_opcode 00:09:33.155 ************************************ 00:09:33.412 10:06:06 -- accel/accel_rpc.sh@55 -- # killprocess 3293901 00:09:33.412 10:06:06 -- common/autotest_common.sh@926 -- # '[' -z 3293901 ']' 00:09:33.412 10:06:06 -- common/autotest_common.sh@930 -- # kill -0 3293901 00:09:33.412 10:06:06 -- common/autotest_common.sh@931 -- # uname 00:09:33.413 10:06:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.413 10:06:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3293901 00:09:33.413 10:06:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.413 10:06:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.413 10:06:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3293901' 00:09:33.413 killing process with pid 3293901 00:09:33.413 10:06:06 -- common/autotest_common.sh@945 -- # kill 3293901 00:09:33.413 10:06:06 -- common/autotest_common.sh@950 -- # wait 3293901 00:09:33.671 00:09:33.671 real 0m1.757s 00:09:33.671 user 0m1.916s 00:09:33.671 sys 0m0.441s 00:09:33.671 10:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.671 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.671 ************************************ 00:09:33.671 END TEST accel_rpc 00:09:33.671 ************************************ 00:09:33.671 10:06:06 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:33.671 10:06:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:33.671 10:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.671 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:09:33.671 ************************************ 00:09:33.671 START TEST app_cmdline 00:09:33.671 ************************************ 00:09:33.671 10:06:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:33.930 * Looking for test storage... 00:09:33.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:33.930 10:06:07 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:33.930 10:06:07 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3294360 00:09:33.930 10:06:07 -- app/cmdline.sh@18 -- # waitforlisten 3294360 00:09:33.930 10:06:07 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:33.930 10:06:07 -- common/autotest_common.sh@819 -- # '[' -z 3294360 ']' 00:09:33.930 10:06:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.930 10:06:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.930 10:06:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.930 10:06:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.930 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:09:33.930 [2024-04-17 10:06:07.076323] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
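cmdline.sh launches the target with an RPC allow-list, so only the two whitelisted methods answer and everything else is rejected, as the JSON-RPC error further down shows. A condensed version of what is traced below (paths as in the sketches above; the error text is quoted from the trace, not re-run):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  "$SPDK_DIR"/scripts/rpc.py spdk_get_version                      # version JSON (SPDK v24.01.1-pre here)
  "$SPDK_DIR"/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
  "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats                # rejected: code -32601, "Method not found"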
00:09:33.930 [2024-04-17 10:06:07.076387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294360 ] 00:09:33.930 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.930 [2024-04-17 10:06:07.158483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.930 [2024-04-17 10:06:07.244512] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.930 [2024-04-17 10:06:07.244669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.864 10:06:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.864 10:06:07 -- common/autotest_common.sh@852 -- # return 0 00:09:34.864 10:06:07 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:34.864 { 00:09:34.864 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:09:34.864 "fields": { 00:09:34.864 "major": 24, 00:09:34.864 "minor": 1, 00:09:34.864 "patch": 1, 00:09:34.864 "suffix": "-pre", 00:09:34.864 "commit": "36faa8c31" 00:09:34.864 } 00:09:34.864 } 00:09:34.864 10:06:08 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:34.864 10:06:08 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:34.864 10:06:08 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:34.864 10:06:08 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:34.864 10:06:08 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:34.864 10:06:08 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:34.864 10:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:34.864 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:34.864 10:06:08 -- app/cmdline.sh@26 -- # sort 00:09:34.864 10:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:34.864 10:06:08 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:34.864 10:06:08 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:34.864 10:06:08 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.864 10:06:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:34.864 10:06:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.864 10:06:08 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.864 10:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.864 10:06:08 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.864 10:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.864 10:06:08 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.864 10:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.864 10:06:08 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.864 10:06:08 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:34.864 10:06:08 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:35.122 request: 00:09:35.122 { 00:09:35.122 "method": "env_dpdk_get_mem_stats", 00:09:35.122 "req_id": 1 00:09:35.122 } 00:09:35.122 Got JSON-RPC error response 00:09:35.122 response: 00:09:35.122 { 00:09:35.122 "code": -32601, 00:09:35.122 "message": "Method not found" 00:09:35.122 } 00:09:35.122 10:06:08 -- common/autotest_common.sh@643 -- # es=1 00:09:35.122 10:06:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:35.122 10:06:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:35.122 10:06:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:35.122 10:06:08 -- app/cmdline.sh@1 -- # killprocess 3294360 00:09:35.122 10:06:08 -- common/autotest_common.sh@926 -- # '[' -z 3294360 ']' 00:09:35.122 10:06:08 -- common/autotest_common.sh@930 -- # kill -0 3294360 00:09:35.123 10:06:08 -- common/autotest_common.sh@931 -- # uname 00:09:35.123 10:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:35.123 10:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3294360 00:09:35.123 10:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:35.123 10:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:35.123 10:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3294360' 00:09:35.123 killing process with pid 3294360 00:09:35.123 10:06:08 -- common/autotest_common.sh@945 -- # kill 3294360 00:09:35.123 10:06:08 -- common/autotest_common.sh@950 -- # wait 3294360 00:09:35.380 00:09:35.380 real 0m1.743s 00:09:35.380 user 0m2.074s 00:09:35.380 sys 0m0.447s 00:09:35.380 10:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.380 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.380 ************************************ 00:09:35.380 END TEST app_cmdline 00:09:35.380 ************************************ 00:09:35.639 10:06:08 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:35.639 10:06:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.639 10:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.639 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.639 ************************************ 00:09:35.639 START TEST version 00:09:35.639 ************************************ 00:09:35.639 10:06:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:35.639 * Looking for test storage... 
00:09:35.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:35.640 10:06:08 -- app/version.sh@17 -- # get_header_version major 00:09:35.640 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:35.640 10:06:08 -- app/version.sh@14 -- # cut -f2 00:09:35.640 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:09:35.640 10:06:08 -- app/version.sh@17 -- # major=24 00:09:35.640 10:06:08 -- app/version.sh@18 -- # get_header_version minor 00:09:35.640 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:35.640 10:06:08 -- app/version.sh@14 -- # cut -f2 00:09:35.640 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:09:35.640 10:06:08 -- app/version.sh@18 -- # minor=1 00:09:35.640 10:06:08 -- app/version.sh@19 -- # get_header_version patch 00:09:35.640 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:35.640 10:06:08 -- app/version.sh@14 -- # cut -f2 00:09:35.640 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:09:35.640 10:06:08 -- app/version.sh@19 -- # patch=1 00:09:35.640 10:06:08 -- app/version.sh@20 -- # get_header_version suffix 00:09:35.640 10:06:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:35.640 10:06:08 -- app/version.sh@14 -- # cut -f2 00:09:35.640 10:06:08 -- app/version.sh@14 -- # tr -d '"' 00:09:35.640 10:06:08 -- app/version.sh@20 -- # suffix=-pre 00:09:35.640 10:06:08 -- app/version.sh@22 -- # version=24.1 00:09:35.640 10:06:08 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:35.640 10:06:08 -- app/version.sh@25 -- # version=24.1.1 00:09:35.640 10:06:08 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:35.640 10:06:08 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:35.640 10:06:08 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:35.640 10:06:08 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:35.640 10:06:08 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:35.640 00:09:35.640 real 0m0.162s 00:09:35.640 user 0m0.089s 00:09:35.640 sys 0m0.108s 00:09:35.640 10:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.640 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.640 ************************************ 00:09:35.640 END TEST version 00:09:35.640 ************************************ 00:09:35.640 10:06:08 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@204 -- # uname -s 00:09:35.640 10:06:08 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:35.640 10:06:08 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:35.640 10:06:08 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:35.640 10:06:08 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@268 -- # timing_exit lib 00:09:35.640 10:06:08 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:09:35.640 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.640 10:06:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:09:35.640 10:06:08 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:09:35.640 10:06:08 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:35.640 10:06:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:35.640 10:06:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.640 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.640 ************************************ 00:09:35.640 START TEST nvmf_tcp 00:09:35.640 ************************************ 00:09:35.640 10:06:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:35.905 * Looking for test storage... 00:09:35.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:35.905 10:06:09 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:35.905 10:06:09 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:35.905 10:06:09 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.906 10:06:09 -- nvmf/common.sh@7 -- # uname -s 00:09:35.906 10:06:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.906 10:06:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.906 10:06:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.906 10:06:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.906 10:06:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.906 10:06:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.906 10:06:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.906 10:06:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.906 10:06:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.906 10:06:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.906 10:06:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:35.906 10:06:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:35.906 10:06:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.906 10:06:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.906 10:06:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.906 10:06:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.906 10:06:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.906 10:06:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.906 10:06:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.906 10:06:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@5 -- # export PATH 00:09:35.906 10:06:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- nvmf/common.sh@46 -- # : 0 00:09:35.906 10:06:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:35.906 10:06:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:35.906 10:06:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.906 10:06:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.906 10:06:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:35.906 10:06:09 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:35.906 10:06:09 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:35.906 10:06:09 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:35.906 10:06:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:35.906 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 10:06:09 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:09:35.906 10:06:09 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:35.906 10:06:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:35.906 10:06:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.906 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 ************************************ 00:09:35.906 START TEST nvmf_example 00:09:35.906 ************************************ 00:09:35.906 10:06:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:35.906 * Looking for test storage... 
00:09:35.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.906 10:06:09 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.906 10:06:09 -- nvmf/common.sh@7 -- # uname -s 00:09:35.906 10:06:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.906 10:06:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.906 10:06:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.906 10:06:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.906 10:06:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.906 10:06:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.906 10:06:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.906 10:06:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.906 10:06:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.906 10:06:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.906 10:06:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:35.906 10:06:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:35.906 10:06:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.906 10:06:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.906 10:06:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.906 10:06:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.906 10:06:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.906 10:06:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.906 10:06:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.906 10:06:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- paths/export.sh@5 -- # export PATH 00:09:35.906 10:06:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.906 10:06:09 -- nvmf/common.sh@46 -- # : 0 00:09:35.906 10:06:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:35.906 10:06:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:35.906 10:06:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.906 10:06:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.906 10:06:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:35.906 10:06:09 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:35.906 10:06:09 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:35.906 10:06:09 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:35.906 10:06:09 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:35.906 10:06:09 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:35.906 10:06:09 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:35.906 10:06:09 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:35.906 10:06:09 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:35.906 10:06:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:35.906 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:09:35.906 10:06:09 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:35.906 10:06:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:35.906 10:06:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.906 10:06:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:35.906 10:06:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:35.906 10:06:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:35.906 10:06:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.906 10:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.906 10:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.906 10:06:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:35.906 10:06:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:35.906 10:06:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:35.906 10:06:09 -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.526 10:06:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:42.526 10:06:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:42.526 10:06:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:42.526 10:06:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:42.526 10:06:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:42.526 10:06:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:42.526 10:06:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:42.526 10:06:14 -- nvmf/common.sh@294 -- # net_devs=() 00:09:42.526 10:06:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:42.526 10:06:14 -- nvmf/common.sh@295 -- # e810=() 00:09:42.526 10:06:14 -- nvmf/common.sh@295 -- # local -ga e810 00:09:42.526 10:06:14 -- nvmf/common.sh@296 -- # x722=() 00:09:42.526 10:06:14 -- nvmf/common.sh@296 -- # local -ga x722 00:09:42.526 10:06:14 -- nvmf/common.sh@297 -- # mlx=() 00:09:42.526 10:06:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:42.526 10:06:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.526 10:06:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:42.526 10:06:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:42.526 10:06:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:42.526 10:06:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:42.526 10:06:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:42.526 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:42.526 10:06:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:42.526 10:06:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:42.526 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:42.526 10:06:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:42.526 10:06:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:09:42.527 10:06:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:42.527 10:06:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:42.527 10:06:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.527 10:06:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:42.527 10:06:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.527 10:06:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:42.527 Found net devices under 0000:af:00.0: cvl_0_0 00:09:42.527 10:06:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.527 10:06:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:42.527 10:06:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.527 10:06:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:42.527 10:06:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.527 10:06:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:42.527 Found net devices under 0000:af:00.1: cvl_0_1 00:09:42.527 10:06:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.527 10:06:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:42.527 10:06:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:42.527 10:06:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:42.527 10:06:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.527 10:06:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.527 10:06:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.527 10:06:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:42.527 10:06:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.527 10:06:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.527 10:06:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:42.527 10:06:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.527 10:06:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.527 10:06:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:42.527 10:06:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:42.527 10:06:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.527 10:06:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.527 10:06:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.527 10:06:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.527 10:06:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:42.527 10:06:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.527 10:06:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.527 10:06:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.527 10:06:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:42.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:09:42.527 00:09:42.527 --- 10.0.0.2 ping statistics --- 00:09:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.527 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:42.527 10:06:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:09:42.527 00:09:42.527 --- 10.0.0.1 ping statistics --- 00:09:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.527 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:42.527 10:06:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.527 10:06:14 -- nvmf/common.sh@410 -- # return 0 00:09:42.527 10:06:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:42.527 10:06:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.527 10:06:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:42.527 10:06:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.527 10:06:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:42.527 10:06:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:42.527 10:06:14 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:42.527 10:06:14 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:42.527 10:06:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:42.527 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 10:06:14 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:42.527 10:06:14 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:42.527 10:06:14 -- target/nvmf_example.sh@34 -- # nvmfpid=3298320 00:09:42.527 10:06:14 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:42.527 10:06:14 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.527 10:06:14 -- target/nvmf_example.sh@36 -- # waitforlisten 3298320 00:09:42.527 10:06:14 -- common/autotest_common.sh@819 -- # '[' -z 3298320 ']' 00:09:42.527 10:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.527 10:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.527 10:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
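The nvmf_tcp_init sequence traced above builds a self-contained loopback topology for the TCP tests: one port of the dual-port E810 adapter detected earlier (cvl_0_0) is moved into a private network namespace and serves as the target side, while its sibling port (cvl_0_1) stays in the default namespace as the initiator side; the example target is then launched inside that namespace. A minimal sketch of the same setup, using the interface names, addresses and port taken from the trace (everything else is illustrative and not the exact helper code from nvmf/common.sh):

# Sketch of the namespace-based NVMe/TCP test topology configured by nvmf_tcp_init
# (names and addresses follow the trace above; this is a simplified illustration).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, left in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open TCP port 4420 in the host firewall, as in the trace
ping -c 1 10.0.0.2                       # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator reachability check

# The example target is then run inside the namespace so that its TCP listeners live there:
# ip netns exec "$NS" ./build/examples/nvmf -i 0 -g 10000 -m 0xF

Splitting the two ports across namespaces lets a single host act as both NVMe/TCP target and initiator over the physical NIC, which is why every target-side command later in the trace is wrapped in "ip netns exec cvl_0_0_ns_spdk".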
00:09:42.527 10:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.527 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.786 10:06:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.786 10:06:15 -- common/autotest_common.sh@852 -- # return 0 00:09:42.786 10:06:15 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:42.786 10:06:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:42.786 10:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:15 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.786 10:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.786 10:06:15 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.786 10:06:16 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:42.786 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.786 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.786 10:06:16 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:42.786 10:06:16 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.786 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.786 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.786 10:06:16 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:42.786 10:06:16 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.786 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.786 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.786 10:06:16 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.786 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.786 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:09:42.786 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.786 10:06:16 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:42.786 10:06:16 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:42.786 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.988 Initializing NVMe Controllers 00:09:54.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:54.988 Initialization complete. Launching workers. 
00:09:54.988 ======================================================== 00:09:54.988 Latency(us) 00:09:54.988 Device Information : IOPS MiB/s Average min max 00:09:54.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15685.31 61.27 4081.68 1006.84 19273.24 00:09:54.988 ======================================================== 00:09:54.988 Total : 15685.31 61.27 4081.68 1006.84 19273.24 00:09:54.988 00:09:54.988 10:06:26 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:54.988 10:06:26 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:54.988 10:06:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:54.988 10:06:26 -- nvmf/common.sh@116 -- # sync 00:09:54.988 10:06:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:54.988 10:06:26 -- nvmf/common.sh@119 -- # set +e 00:09:54.988 10:06:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:54.988 10:06:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:54.988 rmmod nvme_tcp 00:09:54.988 rmmod nvme_fabrics 00:09:54.988 rmmod nvme_keyring 00:09:54.988 10:06:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:54.988 10:06:26 -- nvmf/common.sh@123 -- # set -e 00:09:54.988 10:06:26 -- nvmf/common.sh@124 -- # return 0 00:09:54.988 10:06:26 -- nvmf/common.sh@477 -- # '[' -n 3298320 ']' 00:09:54.988 10:06:26 -- nvmf/common.sh@478 -- # killprocess 3298320 00:09:54.988 10:06:26 -- common/autotest_common.sh@926 -- # '[' -z 3298320 ']' 00:09:54.988 10:06:26 -- common/autotest_common.sh@930 -- # kill -0 3298320 00:09:54.988 10:06:26 -- common/autotest_common.sh@931 -- # uname 00:09:54.988 10:06:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:54.988 10:06:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3298320 00:09:54.988 10:06:26 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:09:54.988 10:06:26 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:09:54.988 10:06:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3298320' 00:09:54.988 killing process with pid 3298320 00:09:54.988 10:06:26 -- common/autotest_common.sh@945 -- # kill 3298320 00:09:54.988 10:06:26 -- common/autotest_common.sh@950 -- # wait 3298320 00:09:54.988 nvmf threads initialize successfully 00:09:54.988 bdev subsystem init successfully 00:09:54.988 created a nvmf target service 00:09:54.988 create targets's poll groups done 00:09:54.988 all subsystems of target started 00:09:54.988 nvmf target is running 00:09:54.988 all subsystems of target stopped 00:09:54.988 destroy targets's poll groups done 00:09:54.988 destroyed the nvmf target service 00:09:54.988 bdev subsystem finish successfully 00:09:54.988 nvmf threads destroy successfully 00:09:54.988 10:06:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:54.988 10:06:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:54.988 10:06:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:54.988 10:06:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.988 10:06:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:54.988 10:06:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.988 10:06:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.988 10:06:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.556 10:06:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:55.556 10:06:28 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:55.556 10:06:28 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:09:55.556 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 00:09:55.556 real 0m19.678s 00:09:55.556 user 0m46.841s 00:09:55.556 sys 0m5.648s 00:09:55.556 10:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.556 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 ************************************ 00:09:55.556 END TEST nvmf_example 00:09:55.556 ************************************ 00:09:55.556 10:06:28 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:55.556 10:06:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:55.556 10:06:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.556 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.556 ************************************ 00:09:55.556 START TEST nvmf_filesystem 00:09:55.556 ************************************ 00:09:55.556 10:06:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:55.556 * Looking for test storage... 00:09:55.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.818 10:06:28 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:55.818 10:06:28 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:55.818 10:06:28 -- common/autotest_common.sh@34 -- # set -e 00:09:55.818 10:06:28 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:55.818 10:06:28 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:55.818 10:06:28 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:55.818 10:06:28 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:55.818 10:06:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:55.818 10:06:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:55.818 10:06:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:55.818 10:06:28 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:55.818 10:06:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:55.818 10:06:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:55.818 10:06:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:55.818 10:06:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:55.818 10:06:28 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:55.818 10:06:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:55.818 10:06:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:55.818 10:06:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:55.818 10:06:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:55.818 10:06:28 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:55.818 10:06:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:55.818 10:06:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:55.818 10:06:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:55.818 10:06:28 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:55.818 10:06:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:55.818 10:06:28 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:55.818 10:06:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:55.818 10:06:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:55.818 10:06:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:55.818 10:06:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:55.818 10:06:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:55.818 10:06:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:55.818 10:06:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:55.818 10:06:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:55.818 10:06:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:55.818 10:06:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:55.818 10:06:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:55.818 10:06:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:55.818 10:06:28 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:55.818 10:06:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:55.818 10:06:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:55.818 10:06:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:55.818 10:06:28 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:55.818 10:06:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:55.818 10:06:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:55.818 10:06:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:55.818 10:06:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:09:55.818 10:06:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:09:55.818 10:06:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:55.818 10:06:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:09:55.818 10:06:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:09:55.818 10:06:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:09:55.818 10:06:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:09:55.818 10:06:28 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:09:55.818 10:06:28 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:09:55.818 10:06:28 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:09:55.818 10:06:28 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:09:55.818 10:06:28 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:09:55.818 10:06:28 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:09:55.818 10:06:28 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:09:55.818 10:06:28 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:09:55.819 10:06:28 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:09:55.819 10:06:28 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:09:55.819 10:06:28 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:09:55.819 10:06:28 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:09:55.819 10:06:28 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:09:55.819 10:06:28 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:09:55.819 10:06:28 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:09:55.819 10:06:28 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:09:55.819 10:06:28 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:09:55.819 10:06:28 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:09:55.819 10:06:28 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:09:55.819 10:06:28 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:09:55.819 10:06:28 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:09:55.819 10:06:28 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:09:55.819 10:06:28 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:09:55.819 10:06:28 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:09:55.819 10:06:28 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:55.819 10:06:28 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:09:55.819 10:06:28 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:09:55.819 10:06:28 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:55.819 10:06:28 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:55.819 10:06:28 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:55.819 10:06:28 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:55.819 10:06:28 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:55.819 10:06:28 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:55.819 10:06:28 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:55.819 10:06:28 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:55.819 10:06:28 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:55.819 10:06:28 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:55.819 10:06:28 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:55.819 10:06:28 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:55.819 10:06:28 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:55.819 10:06:28 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:55.819 10:06:28 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:55.819 10:06:28 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:55.819 #define SPDK_CONFIG_H 00:09:55.819 #define SPDK_CONFIG_APPS 1 00:09:55.819 #define SPDK_CONFIG_ARCH native 00:09:55.819 #undef SPDK_CONFIG_ASAN 00:09:55.819 #undef SPDK_CONFIG_AVAHI 00:09:55.819 #undef SPDK_CONFIG_CET 00:09:55.819 #define SPDK_CONFIG_COVERAGE 1 00:09:55.819 #define SPDK_CONFIG_CROSS_PREFIX 00:09:55.819 #undef SPDK_CONFIG_CRYPTO 00:09:55.819 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:55.819 #undef SPDK_CONFIG_CUSTOMOCF 00:09:55.819 #undef SPDK_CONFIG_DAOS 00:09:55.819 #define SPDK_CONFIG_DAOS_DIR 00:09:55.819 #define SPDK_CONFIG_DEBUG 1 00:09:55.819 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:55.819 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:55.819 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:55.819 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:09:55.819 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:55.819 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:55.819 #define SPDK_CONFIG_EXAMPLES 1 00:09:55.819 #undef SPDK_CONFIG_FC 00:09:55.819 #define SPDK_CONFIG_FC_PATH 00:09:55.819 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:55.819 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:55.819 #undef SPDK_CONFIG_FUSE 00:09:55.819 #undef SPDK_CONFIG_FUZZER 00:09:55.819 #define SPDK_CONFIG_FUZZER_LIB 00:09:55.819 #undef SPDK_CONFIG_GOLANG 00:09:55.819 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:55.819 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:55.819 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:55.819 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:55.819 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:55.819 #define SPDK_CONFIG_IDXD 1 00:09:55.819 #undef SPDK_CONFIG_IDXD_KERNEL 00:09:55.819 #undef SPDK_CONFIG_IPSEC_MB 00:09:55.819 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:55.819 #define SPDK_CONFIG_ISAL 1 00:09:55.819 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:55.819 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:55.819 #define SPDK_CONFIG_LIBDIR 00:09:55.819 #undef SPDK_CONFIG_LTO 00:09:55.819 #define SPDK_CONFIG_MAX_LCORES 00:09:55.819 #define SPDK_CONFIG_NVME_CUSE 1 00:09:55.819 #undef SPDK_CONFIG_OCF 00:09:55.819 #define SPDK_CONFIG_OCF_PATH 00:09:55.819 #define SPDK_CONFIG_OPENSSL_PATH 00:09:55.819 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:55.819 #undef SPDK_CONFIG_PGO_USE 00:09:55.819 #define SPDK_CONFIG_PREFIX /usr/local 00:09:55.819 #undef SPDK_CONFIG_RAID5F 00:09:55.819 #undef SPDK_CONFIG_RBD 00:09:55.819 #define SPDK_CONFIG_RDMA 1 00:09:55.819 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:55.819 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:55.819 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:55.819 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:55.819 #define SPDK_CONFIG_SHARED 1 00:09:55.819 #undef SPDK_CONFIG_SMA 00:09:55.819 #define SPDK_CONFIG_TESTS 1 00:09:55.819 #undef SPDK_CONFIG_TSAN 00:09:55.819 #define SPDK_CONFIG_UBLK 1 00:09:55.819 #define SPDK_CONFIG_UBSAN 1 00:09:55.819 #undef SPDK_CONFIG_UNIT_TESTS 00:09:55.819 #undef SPDK_CONFIG_URING 00:09:55.819 #define SPDK_CONFIG_URING_PATH 00:09:55.819 #undef SPDK_CONFIG_URING_ZNS 00:09:55.819 #undef SPDK_CONFIG_USDT 00:09:55.819 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:55.819 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:55.819 #undef SPDK_CONFIG_VFIO_USER 00:09:55.819 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:55.819 #define SPDK_CONFIG_VHOST 1 00:09:55.819 #define SPDK_CONFIG_VIRTIO 1 00:09:55.819 #undef SPDK_CONFIG_VTUNE 00:09:55.819 #define SPDK_CONFIG_VTUNE_DIR 00:09:55.819 #define SPDK_CONFIG_WERROR 1 00:09:55.819 #define SPDK_CONFIG_WPDK_DIR 00:09:55.819 #undef SPDK_CONFIG_XNVME 00:09:55.819 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:55.819 10:06:28 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:55.819 10:06:28 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.819 10:06:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.819 10:06:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.819 10:06:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.819 10:06:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.819 10:06:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.819 10:06:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.819 10:06:28 -- paths/export.sh@5 -- # export PATH 00:09:55.819 10:06:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.819 10:06:28 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:55.819 10:06:28 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:55.819 10:06:28 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:55.819 10:06:28 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:55.819 10:06:28 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:55.819 10:06:28 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:55.819 10:06:28 -- pm/common@16 -- # TEST_TAG=N/A 00:09:55.819 10:06:28 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:55.819 10:06:28 -- common/autotest_common.sh@52 -- # : 1 00:09:55.819 10:06:28 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:09:55.819 10:06:28 -- common/autotest_common.sh@56 -- # : 0 00:09:55.819 10:06:28 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:55.819 10:06:28 -- 
common/autotest_common.sh@58 -- # : 0 00:09:55.819 10:06:28 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:09:55.819 10:06:28 -- common/autotest_common.sh@60 -- # : 1 00:09:55.819 10:06:28 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:55.819 10:06:28 -- common/autotest_common.sh@62 -- # : 0 00:09:55.819 10:06:28 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:09:55.819 10:06:28 -- common/autotest_common.sh@64 -- # : 00:09:55.819 10:06:28 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:09:55.819 10:06:28 -- common/autotest_common.sh@66 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:09:55.820 10:06:28 -- common/autotest_common.sh@68 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:09:55.820 10:06:28 -- common/autotest_common.sh@70 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:09:55.820 10:06:28 -- common/autotest_common.sh@72 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:55.820 10:06:28 -- common/autotest_common.sh@74 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:09:55.820 10:06:28 -- common/autotest_common.sh@76 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:09:55.820 10:06:28 -- common/autotest_common.sh@78 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:09:55.820 10:06:28 -- common/autotest_common.sh@80 -- # : 1 00:09:55.820 10:06:28 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:09:55.820 10:06:28 -- common/autotest_common.sh@82 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:09:55.820 10:06:28 -- common/autotest_common.sh@84 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:09:55.820 10:06:28 -- common/autotest_common.sh@86 -- # : 1 00:09:55.820 10:06:28 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:09:55.820 10:06:28 -- common/autotest_common.sh@88 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:09:55.820 10:06:28 -- common/autotest_common.sh@90 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:55.820 10:06:28 -- common/autotest_common.sh@92 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:09:55.820 10:06:28 -- common/autotest_common.sh@94 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:09:55.820 10:06:28 -- common/autotest_common.sh@96 -- # : tcp 00:09:55.820 10:06:28 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:55.820 10:06:28 -- common/autotest_common.sh@98 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:09:55.820 10:06:28 -- common/autotest_common.sh@100 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:09:55.820 10:06:28 -- common/autotest_common.sh@102 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:09:55.820 10:06:28 -- common/autotest_common.sh@104 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:09:55.820 
10:06:28 -- common/autotest_common.sh@106 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:09:55.820 10:06:28 -- common/autotest_common.sh@108 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:09:55.820 10:06:28 -- common/autotest_common.sh@110 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:09:55.820 10:06:28 -- common/autotest_common.sh@112 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:55.820 10:06:28 -- common/autotest_common.sh@114 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:09:55.820 10:06:28 -- common/autotest_common.sh@116 -- # : 1 00:09:55.820 10:06:28 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:09:55.820 10:06:28 -- common/autotest_common.sh@118 -- # : 00:09:55.820 10:06:28 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:55.820 10:06:28 -- common/autotest_common.sh@120 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:09:55.820 10:06:28 -- common/autotest_common.sh@122 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:09:55.820 10:06:28 -- common/autotest_common.sh@124 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:09:55.820 10:06:28 -- common/autotest_common.sh@126 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:09:55.820 10:06:28 -- common/autotest_common.sh@128 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:09:55.820 10:06:28 -- common/autotest_common.sh@130 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:09:55.820 10:06:28 -- common/autotest_common.sh@132 -- # : 00:09:55.820 10:06:28 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:09:55.820 10:06:28 -- common/autotest_common.sh@134 -- # : true 00:09:55.820 10:06:28 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:09:55.820 10:06:28 -- common/autotest_common.sh@136 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:09:55.820 10:06:28 -- common/autotest_common.sh@138 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:09:55.820 10:06:28 -- common/autotest_common.sh@140 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:09:55.820 10:06:28 -- common/autotest_common.sh@142 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:09:55.820 10:06:28 -- common/autotest_common.sh@144 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:09:55.820 10:06:28 -- common/autotest_common.sh@146 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:09:55.820 10:06:28 -- common/autotest_common.sh@148 -- # : e810 00:09:55.820 10:06:28 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:09:55.820 10:06:28 -- common/autotest_common.sh@150 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:09:55.820 10:06:28 -- common/autotest_common.sh@152 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:09:55.820 10:06:28 -- common/autotest_common.sh@154 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:09:55.820 10:06:28 -- common/autotest_common.sh@156 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:09:55.820 10:06:28 -- common/autotest_common.sh@158 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:09:55.820 10:06:28 -- common/autotest_common.sh@160 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:09:55.820 10:06:28 -- common/autotest_common.sh@163 -- # : 00:09:55.820 10:06:28 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:09:55.820 10:06:28 -- common/autotest_common.sh@165 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:09:55.820 10:06:28 -- common/autotest_common.sh@167 -- # : 0 00:09:55.820 10:06:28 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:55.820 10:06:28 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:55.820 10:06:28 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:55.820 10:06:28 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:55.820 10:06:28 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:55.820 10:06:28 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:55.820 10:06:28 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:55.820 10:06:28 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:09:55.820 10:06:28 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:55.820 10:06:28 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:55.820 10:06:28 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:55.820 10:06:28 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:55.821 10:06:28 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:55.821 10:06:28 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:09:55.821 10:06:28 -- common/autotest_common.sh@196 -- # cat 00:09:55.821 10:06:28 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:09:55.821 10:06:28 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:55.821 10:06:28 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:55.821 10:06:28 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:55.821 10:06:28 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:55.821 10:06:28 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:09:55.821 10:06:28 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:09:55.821 10:06:28 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:55.821 10:06:28 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:55.821 10:06:28 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:55.821 10:06:28 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:55.821 10:06:28 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:55.821 10:06:28 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:55.821 10:06:28 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:55.821 10:06:28 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:55.821 10:06:28 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:55.821 10:06:28 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:55.821 10:06:28 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:55.821 10:06:28 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:55.821 10:06:28 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:09:55.821 10:06:28 -- common/autotest_common.sh@249 -- # export valgrind= 00:09:55.821 10:06:28 -- common/autotest_common.sh@249 -- # valgrind= 00:09:55.821 10:06:28 -- common/autotest_common.sh@255 -- # uname -s 00:09:55.821 10:06:28 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:09:55.821 10:06:28 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:09:55.821 10:06:28 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:09:55.821 10:06:28 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:09:55.821 10:06:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@265 -- # MAKE=make 00:09:55.821 10:06:28 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:09:55.821 10:06:28 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:09:55.821 10:06:28 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:09:55.821 10:06:28 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:55.821 10:06:28 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:09:55.821 10:06:28 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:09:55.821 10:06:28 -- common/autotest_common.sh@291 -- # for i in "$@" 00:09:55.821 10:06:28 -- common/autotest_common.sh@292 -- # case "$i" in 00:09:55.821 10:06:28 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:09:55.821 10:06:28 -- common/autotest_common.sh@309 -- # [[ -z 3301093 ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@309 -- # 
kill -0 3301093 00:09:55.821 10:06:28 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:09:55.821 10:06:28 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:09:55.821 10:06:28 -- common/autotest_common.sh@322 -- # local mount target_dir 00:09:55.821 10:06:28 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:09:55.821 10:06:28 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:09:55.821 10:06:28 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:09:55.821 10:06:28 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:09:55.821 10:06:28 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.eP5cjn 00:09:55.821 10:06:28 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:55.821 10:06:28 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:09:55.821 10:06:28 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eP5cjn/tests/target /tmp/spdk.eP5cjn 00:09:55.821 10:06:29 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@318 -- # df -T 00:09:55.821 10:06:29 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=995520512 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4288909312 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=82413420544 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94501478400 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=12088057856 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=47248146432 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=47250739200 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=18890838016 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18900295680 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=9457664 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=47250284544 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47250739200 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=454656 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450143744 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450147840 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450143744 00:09:55.821 10:06:29 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450147840 00:09:55.821 10:06:29 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:09:55.821 10:06:29 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:55.821 10:06:29 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:09:55.821 * Looking for test storage... 
00:09:55.821 10:06:29 -- common/autotest_common.sh@359 -- # local target_space new_size 00:09:55.821 10:06:29 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:09:55.821 10:06:29 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.821 10:06:29 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:55.821 10:06:29 -- common/autotest_common.sh@363 -- # mount=/ 00:09:55.821 10:06:29 -- common/autotest_common.sh@365 -- # target_space=82413420544 00:09:55.821 10:06:29 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:09:55.821 10:06:29 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:09:55.821 10:06:29 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:09:55.821 10:06:29 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:09:55.821 10:06:29 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:09:55.821 10:06:29 -- common/autotest_common.sh@372 -- # new_size=14302650368 00:09:55.821 10:06:29 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:55.821 10:06:29 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.821 10:06:29 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.821 10:06:29 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.821 10:06:29 -- common/autotest_common.sh@380 -- # return 0 00:09:55.821 10:06:29 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:09:55.821 10:06:29 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:09:55.822 10:06:29 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:55.822 10:06:29 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:55.822 10:06:29 -- common/autotest_common.sh@1672 -- # true 00:09:55.822 10:06:29 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:09:55.822 10:06:29 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:55.822 10:06:29 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:55.822 10:06:29 -- common/autotest_common.sh@27 -- # exec 00:09:55.822 10:06:29 -- common/autotest_common.sh@29 -- # exec 00:09:55.822 10:06:29 -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:55.822 10:06:29 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:09:55.822 10:06:29 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:55.822 10:06:29 -- common/autotest_common.sh@18 -- # set -x 00:09:55.822 10:06:29 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.822 10:06:29 -- nvmf/common.sh@7 -- # uname -s 00:09:55.822 10:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.822 10:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.822 10:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.822 10:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.822 10:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.822 10:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.822 10:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.822 10:06:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.822 10:06:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.822 10:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.822 10:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:55.822 10:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:55.822 10:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.822 10:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.822 10:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.822 10:06:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.822 10:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.822 10:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.822 10:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.822 10:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.822 10:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.822 10:06:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.822 10:06:29 -- paths/export.sh@5 -- # export PATH 00:09:55.822 10:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.822 10:06:29 -- nvmf/common.sh@46 -- # : 0 00:09:55.822 10:06:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:55.822 10:06:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:55.822 10:06:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:55.822 10:06:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.822 10:06:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.822 10:06:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:55.822 10:06:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:55.822 10:06:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:55.822 10:06:29 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:55.822 10:06:29 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:55.822 10:06:29 -- target/filesystem.sh@15 -- # nvmftestinit 00:09:55.822 10:06:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:55.822 10:06:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.822 10:06:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:55.822 10:06:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:55.822 10:06:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:55.822 10:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.822 10:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.822 10:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.822 10:06:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:55.822 10:06:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:55.822 10:06:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:55.822 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:10:02.385 10:06:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:02.385 10:06:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:02.385 10:06:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:02.385 10:06:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:02.385 10:06:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:02.385 10:06:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:02.385 10:06:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:02.385 10:06:34 -- 
nvmf/common.sh@294 -- # net_devs=() 00:10:02.385 10:06:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:02.385 10:06:34 -- nvmf/common.sh@295 -- # e810=() 00:10:02.385 10:06:34 -- nvmf/common.sh@295 -- # local -ga e810 00:10:02.385 10:06:34 -- nvmf/common.sh@296 -- # x722=() 00:10:02.385 10:06:34 -- nvmf/common.sh@296 -- # local -ga x722 00:10:02.385 10:06:34 -- nvmf/common.sh@297 -- # mlx=() 00:10:02.385 10:06:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:02.385 10:06:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.385 10:06:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:02.385 10:06:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:02.385 10:06:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:02.385 10:06:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:02.385 10:06:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:02.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:02.385 10:06:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:02.385 10:06:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:02.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:02.385 10:06:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:02.385 10:06:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:02.385 10:06:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.385 10:06:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:02.385 10:06:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.385 10:06:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:02.385 Found net devices under 0000:af:00.0: cvl_0_0 00:10:02.385 10:06:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.385 10:06:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:02.385 10:06:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.385 10:06:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:02.385 10:06:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.385 10:06:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:02.385 Found net devices under 0000:af:00.1: cvl_0_1 00:10:02.385 10:06:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.385 10:06:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:02.385 10:06:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:02.385 10:06:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:02.385 10:06:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:02.386 10:06:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.386 10:06:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.386 10:06:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.386 10:06:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:02.386 10:06:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.386 10:06:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.386 10:06:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:02.386 10:06:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.386 10:06:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.386 10:06:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:02.386 10:06:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:02.386 10:06:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.386 10:06:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.386 10:06:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.386 10:06:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.386 10:06:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:02.386 10:06:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.386 10:06:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.386 10:06:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.386 10:06:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:02.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:02.386 00:10:02.386 --- 10.0.0.2 ping statistics --- 00:10:02.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.386 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:02.386 10:06:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:02.386 00:10:02.386 --- 10.0.0.1 ping statistics --- 00:10:02.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.386 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:02.386 10:06:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.386 10:06:34 -- nvmf/common.sh@410 -- # return 0 00:10:02.386 10:06:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:02.386 10:06:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.386 10:06:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:02.386 10:06:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:02.386 10:06:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.386 10:06:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:02.386 10:06:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:02.386 10:06:34 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:02.386 10:06:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:02.386 10:06:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.386 10:06:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.386 ************************************ 00:10:02.386 START TEST nvmf_filesystem_no_in_capsule 00:10:02.386 ************************************ 00:10:02.386 10:06:34 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:10:02.386 10:06:34 -- target/filesystem.sh@47 -- # in_capsule=0 00:10:02.386 10:06:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:02.386 10:06:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:02.386 10:06:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:02.386 10:06:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.386 10:06:34 -- nvmf/common.sh@469 -- # nvmfpid=3304266 00:10:02.386 10:06:34 -- nvmf/common.sh@470 -- # waitforlisten 3304266 00:10:02.386 10:06:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.386 10:06:34 -- common/autotest_common.sh@819 -- # '[' -z 3304266 ']' 00:10:02.386 10:06:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.386 10:06:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:02.386 10:06:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.386 10:06:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:02.386 10:06:34 -- common/autotest_common.sh@10 -- # set +x 00:10:02.386 [2024-04-17 10:06:34.857232] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
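Condensed, the nvmf_tcp_init sequence traced above keeps one E810 port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator and moves the other (cvl_0_0, 10.0.0.2) into a dedicated namespace that will host the target. A sketch using the interface names and addresses from this run; the NIC names are specific to this machine:

# Target port lives in its own network namespace, initiator port stays in the root namespace.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator port and verify reachability both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp    # host-side driver needed for the later nvme connect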
00:10:02.386 [2024-04-17 10:06:34.857287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.386 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.386 [2024-04-17 10:06:34.944119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.386 [2024-04-17 10:06:35.031845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.386 [2024-04-17 10:06:35.031994] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.386 [2024-04-17 10:06:35.032006] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.386 [2024-04-17 10:06:35.032015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.386 [2024-04-17 10:06:35.032117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.386 [2024-04-17 10:06:35.032217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.386 [2024-04-17 10:06:35.032335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.386 [2024-04-17 10:06:35.032335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.645 10:06:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:02.645 10:06:35 -- common/autotest_common.sh@852 -- # return 0 00:10:02.645 10:06:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:02.645 10:06:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:02.645 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.645 10:06:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.645 10:06:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:02.645 10:06:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:02.645 10:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.645 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.645 [2024-04-17 10:06:35.834502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.646 10:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.646 10:06:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:02.646 10:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.646 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 Malloc1 00:10:02.646 10:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.646 10:06:35 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.646 10:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.646 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 10:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.905 10:06:35 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.905 10:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.905 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.905 10:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.905 10:06:35 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
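From here the target bring-up is driven entirely over the SPDK RPC socket; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py. A hedged sketch of the same bring-up issued directly, with paths relative to an SPDK checkout and the flag values copied from this run:

# Start nvmf_tgt inside the target namespace (4-core mask, shm id 0, tracepoint mask 0xFFFF).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Simplified stand-in for waitforlisten: the real helper also polls the PID and the RPC server.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

rpc=./scripts/rpc.py

# TCP transport with the options traced above; -c 0 disallows in-capsule data
# (the second half of this job repeats everything with -c 4096).
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0

# A 512 MiB RAM-backed bdev (1048576 blocks of 512 bytes) exported through one subsystem.
$rpc bdev_malloc_create 512 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420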
00:10:02.905 10:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.905 10:06:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.905 [2024-04-17 10:06:35.990554] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.905 10:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.905 10:06:35 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:02.905 10:06:35 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:10:02.905 10:06:35 -- common/autotest_common.sh@1358 -- # local bdev_info 00:10:02.905 10:06:35 -- common/autotest_common.sh@1359 -- # local bs 00:10:02.905 10:06:35 -- common/autotest_common.sh@1360 -- # local nb 00:10:02.905 10:06:35 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:02.905 10:06:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:02.905 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:10:02.905 10:06:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:02.905 10:06:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:10:02.905 { 00:10:02.905 "name": "Malloc1", 00:10:02.905 "aliases": [ 00:10:02.905 "0b054d4b-3516-4e84-88b8-33cf7d5e6270" 00:10:02.905 ], 00:10:02.905 "product_name": "Malloc disk", 00:10:02.905 "block_size": 512, 00:10:02.905 "num_blocks": 1048576, 00:10:02.905 "uuid": "0b054d4b-3516-4e84-88b8-33cf7d5e6270", 00:10:02.905 "assigned_rate_limits": { 00:10:02.905 "rw_ios_per_sec": 0, 00:10:02.905 "rw_mbytes_per_sec": 0, 00:10:02.905 "r_mbytes_per_sec": 0, 00:10:02.905 "w_mbytes_per_sec": 0 00:10:02.905 }, 00:10:02.905 "claimed": true, 00:10:02.905 "claim_type": "exclusive_write", 00:10:02.905 "zoned": false, 00:10:02.905 "supported_io_types": { 00:10:02.905 "read": true, 00:10:02.905 "write": true, 00:10:02.905 "unmap": true, 00:10:02.905 "write_zeroes": true, 00:10:02.905 "flush": true, 00:10:02.905 "reset": true, 00:10:02.905 "compare": false, 00:10:02.905 "compare_and_write": false, 00:10:02.905 "abort": true, 00:10:02.905 "nvme_admin": false, 00:10:02.905 "nvme_io": false 00:10:02.905 }, 00:10:02.905 "memory_domains": [ 00:10:02.905 { 00:10:02.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.905 "dma_device_type": 2 00:10:02.905 } 00:10:02.905 ], 00:10:02.905 "driver_specific": {} 00:10:02.905 } 00:10:02.905 ]' 00:10:02.905 10:06:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:10:02.905 10:06:36 -- common/autotest_common.sh@1362 -- # bs=512 00:10:02.905 10:06:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:10:02.905 10:06:36 -- common/autotest_common.sh@1363 -- # nb=1048576 00:10:02.905 10:06:36 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:10:02.905 10:06:36 -- common/autotest_common.sh@1367 -- # echo 512 00:10:02.905 10:06:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:02.905 10:06:36 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.281 10:06:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.281 10:06:37 -- common/autotest_common.sh@1177 -- # local i=0 00:10:04.281 10:06:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.281 10:06:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:04.281 10:06:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:06.196 10:06:39 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:06.196 10:06:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:06.196 10:06:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.196 10:06:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:06.196 10:06:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.196 10:06:39 -- common/autotest_common.sh@1187 -- # return 0 00:10:06.196 10:06:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:06.196 10:06:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:06.196 10:06:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:06.196 10:06:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:06.196 10:06:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:06.196 10:06:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:06.196 10:06:39 -- setup/common.sh@80 -- # echo 536870912 00:10:06.196 10:06:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:06.196 10:06:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:06.196 10:06:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:06.196 10:06:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:06.454 10:06:39 -- target/filesystem.sh@69 -- # partprobe 00:10:07.021 10:06:40 -- target/filesystem.sh@70 -- # sleep 1 00:10:07.956 10:06:41 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:07.956 10:06:41 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:07.956 10:06:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:07.956 10:06:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.956 10:06:41 -- common/autotest_common.sh@10 -- # set +x 00:10:07.956 ************************************ 00:10:07.956 START TEST filesystem_ext4 00:10:07.956 ************************************ 00:10:07.956 10:06:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:07.956 10:06:41 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:07.956 10:06:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.956 10:06:41 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:07.956 10:06:41 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:10:07.956 10:06:41 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:07.956 10:06:41 -- common/autotest_common.sh@904 -- # local i=0 00:10:07.956 10:06:41 -- common/autotest_common.sh@905 -- # local force 00:10:07.956 10:06:41 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:10:07.956 10:06:41 -- common/autotest_common.sh@908 -- # force=-F 00:10:07.956 10:06:41 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:07.956 mke2fs 1.46.5 (30-Dec-2021) 00:10:07.956 Discarding device blocks: 0/522240 done 00:10:07.956 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:07.956 Filesystem UUID: 7d49f0a9-e311-4487-8e09-8e9215e324f4 00:10:07.956 Superblock backups stored on blocks: 00:10:07.956 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:07.956 00:10:07.956 Allocating group tables: 0/64 done 00:10:07.956 Writing inode tables: 0/64 done 00:10:08.213 Creating journal (8192 blocks): done 00:10:09.038 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:10:09.038 00:10:09.038 10:06:42 -- 
common/autotest_common.sh@921 -- # return 0 00:10:09.038 10:06:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.604 10:06:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.861 10:06:42 -- target/filesystem.sh@25 -- # sync 00:10:09.861 10:06:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.861 10:06:42 -- target/filesystem.sh@27 -- # sync 00:10:09.862 10:06:42 -- target/filesystem.sh@29 -- # i=0 00:10:09.862 10:06:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.862 10:06:42 -- target/filesystem.sh@37 -- # kill -0 3304266 00:10:09.862 10:06:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.862 10:06:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.862 10:06:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.862 10:06:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.862 00:10:09.862 real 0m1.896s 00:10:09.862 user 0m0.021s 00:10:09.862 sys 0m0.070s 00:10:09.862 10:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.862 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:10:09.862 ************************************ 00:10:09.862 END TEST filesystem_ext4 00:10:09.862 ************************************ 00:10:09.862 10:06:43 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:09.862 10:06:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:09.862 10:06:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.862 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:10:09.862 ************************************ 00:10:09.862 START TEST filesystem_btrfs 00:10:09.862 ************************************ 00:10:09.862 10:06:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:09.862 10:06:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:09.862 10:06:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.862 10:06:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:09.862 10:06:43 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:10:09.862 10:06:43 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:09.862 10:06:43 -- common/autotest_common.sh@904 -- # local i=0 00:10:09.862 10:06:43 -- common/autotest_common.sh@905 -- # local force 00:10:09.862 10:06:43 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:10:09.862 10:06:43 -- common/autotest_common.sh@910 -- # force=-f 00:10:09.862 10:06:43 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:10.121 btrfs-progs v6.6.2 00:10:10.121 See https://btrfs.readthedocs.io for more information. 00:10:10.121 00:10:10.121 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
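The host side of the test, spread across the trace above, boils down to: connect over NVMe/TCP, wait for the namespace to appear, carve one GPT partition, then for each of ext4/btrfs/xfs make a filesystem, mount it, do a small write and remove, and unmount while checking the target is still alive. A compact sketch with the device name, serial and partition label from this run; the hostnqn/hostid handling is simplified here, and the controller may enumerate under a different name on other hosts:

# Connect as an NVMe/TCP host and wait for the namespace to appear by serial number.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
     --hostnqn="$(nvme gen-hostnqn)"
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # e.g. nvme0n1

# One GPT partition spanning the namespace, then exercise a filesystem on it.
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

mkfs.ext4 -F "/dev/${dev}p1"          # the other rounds use mkfs.btrfs -f / mkfs.xfs -f
mkdir -p /mnt/device
mount "/dev/${dev}p1" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device

kill -0 "$nvmfpid"                    # the target process must still be alive afterwards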
00:10:10.121 NOTE: several default settings have changed in version 5.15, please make sure 00:10:10.121 this does not affect your deployments: 00:10:10.121 - DUP for metadata (-m dup) 00:10:10.121 - enabled no-holes (-O no-holes) 00:10:10.121 - enabled free-space-tree (-R free-space-tree) 00:10:10.121 00:10:10.121 Label: (null) 00:10:10.121 UUID: 6f081a2d-c304-4b1b-a47f-01b34237c191 00:10:10.121 Node size: 16384 00:10:10.121 Sector size: 4096 00:10:10.121 Filesystem size: 510.00MiB 00:10:10.121 Block group profiles: 00:10:10.121 Data: single 8.00MiB 00:10:10.121 Metadata: DUP 32.00MiB 00:10:10.121 System: DUP 8.00MiB 00:10:10.121 SSD detected: yes 00:10:10.121 Zoned device: no 00:10:10.121 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:10.121 Runtime features: free-space-tree 00:10:10.121 Checksum: crc32c 00:10:10.121 Number of devices: 1 00:10:10.121 Devices: 00:10:10.121 ID SIZE PATH 00:10:10.121 1 510.00MiB /dev/nvme0n1p1 00:10:10.121 00:10:10.121 10:06:43 -- common/autotest_common.sh@921 -- # return 0 00:10:10.121 10:06:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.057 10:06:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.057 10:06:44 -- target/filesystem.sh@25 -- # sync 00:10:11.057 10:06:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.057 10:06:44 -- target/filesystem.sh@27 -- # sync 00:10:11.057 10:06:44 -- target/filesystem.sh@29 -- # i=0 00:10:11.057 10:06:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.057 10:06:44 -- target/filesystem.sh@37 -- # kill -0 3304266 00:10:11.057 10:06:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.057 10:06:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.057 10:06:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.057 10:06:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.057 00:10:11.057 real 0m1.226s 00:10:11.057 user 0m0.028s 00:10:11.057 sys 0m0.122s 00:10:11.057 10:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.057 10:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.057 ************************************ 00:10:11.057 END TEST filesystem_btrfs 00:10:11.057 ************************************ 00:10:11.057 10:06:44 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:11.057 10:06:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:11.057 10:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.057 10:06:44 -- common/autotest_common.sh@10 -- # set +x 00:10:11.057 ************************************ 00:10:11.057 START TEST filesystem_xfs 00:10:11.057 ************************************ 00:10:11.057 10:06:44 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:10:11.057 10:06:44 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:11.057 10:06:44 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.057 10:06:44 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:11.057 10:06:44 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:10:11.057 10:06:44 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:11.057 10:06:44 -- common/autotest_common.sh@904 -- # local i=0 00:10:11.057 10:06:44 -- common/autotest_common.sh@905 -- # local force 00:10:11.057 10:06:44 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:10:11.057 10:06:44 -- common/autotest_common.sh@910 -- # force=-f 00:10:11.057 10:06:44 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:11.316 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:11.316 = sectsz=512 attr=2, projid32bit=1 00:10:11.316 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:11.316 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:11.316 data = bsize=4096 blocks=130560, imaxpct=25 00:10:11.316 = sunit=0 swidth=0 blks 00:10:11.316 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:11.316 log =internal log bsize=4096 blocks=16384, version=2 00:10:11.316 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:11.316 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:11.882 Discarding blocks...Done. 00:10:11.882 10:06:45 -- common/autotest_common.sh@921 -- # return 0 00:10:11.882 10:06:45 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:14.414 10:06:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:14.414 10:06:47 -- target/filesystem.sh@25 -- # sync 00:10:14.414 10:06:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:14.414 10:06:47 -- target/filesystem.sh@27 -- # sync 00:10:14.414 10:06:47 -- target/filesystem.sh@29 -- # i=0 00:10:14.414 10:06:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:14.414 10:06:47 -- target/filesystem.sh@37 -- # kill -0 3304266 00:10:14.414 10:06:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:14.414 10:06:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:14.672 10:06:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:14.672 10:06:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:14.672 00:10:14.672 real 0m3.445s 00:10:14.672 user 0m0.023s 00:10:14.672 sys 0m0.072s 00:10:14.672 10:06:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.672 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.672 ************************************ 00:10:14.672 END TEST filesystem_xfs 00:10:14.672 ************************************ 00:10:14.672 10:06:47 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:14.672 10:06:47 -- target/filesystem.sh@93 -- # sync 00:10:14.672 10:06:47 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.672 10:06:47 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.672 10:06:47 -- common/autotest_common.sh@1198 -- # local i=0 00:10:14.672 10:06:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:14.672 10:06:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.672 10:06:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:14.672 10:06:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.672 10:06:47 -- common/autotest_common.sh@1210 -- # return 0 00:10:14.672 10:06:47 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.672 10:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:14.672 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.672 10:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:14.672 10:06:48 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:14.672 10:06:48 -- target/filesystem.sh@101 -- # killprocess 3304266 00:10:14.672 10:06:48 -- common/autotest_common.sh@926 -- # '[' -z 3304266 ']' 00:10:14.672 10:06:48 -- common/autotest_common.sh@930 -- # kill -0 3304266 00:10:14.673 10:06:48 -- 
common/autotest_common.sh@931 -- # uname 00:10:14.931 10:06:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:14.931 10:06:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3304266 00:10:14.931 10:06:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:14.931 10:06:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:14.931 10:06:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3304266' 00:10:14.931 killing process with pid 3304266 00:10:14.931 10:06:48 -- common/autotest_common.sh@945 -- # kill 3304266 00:10:14.931 10:06:48 -- common/autotest_common.sh@950 -- # wait 3304266 00:10:15.190 10:06:48 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:15.190 00:10:15.190 real 0m13.647s 00:10:15.190 user 0m53.469s 00:10:15.190 sys 0m1.269s 00:10:15.190 10:06:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.190 10:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:15.190 ************************************ 00:10:15.190 END TEST nvmf_filesystem_no_in_capsule 00:10:15.190 ************************************ 00:10:15.190 10:06:48 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:15.190 10:06:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:15.190 10:06:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.190 10:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:15.190 ************************************ 00:10:15.190 START TEST nvmf_filesystem_in_capsule 00:10:15.190 ************************************ 00:10:15.190 10:06:48 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:10:15.190 10:06:48 -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:15.190 10:06:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.190 10:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:15.190 10:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:15.190 10:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:15.190 10:06:48 -- nvmf/common.sh@469 -- # nvmfpid=3306896 00:10:15.190 10:06:48 -- nvmf/common.sh@470 -- # waitforlisten 3306896 00:10:15.190 10:06:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.190 10:06:48 -- common/autotest_common.sh@819 -- # '[' -z 3306896 ']' 00:10:15.190 10:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.190 10:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:15.190 10:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.190 10:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:15.190 10:06:48 -- common/autotest_common.sh@10 -- # set +x 00:10:15.448 [2024-04-17 10:06:48.548246] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
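Teardown of the first (no-in-capsule) pass, traced just above, is symmetric with the setup: drop the test partition, detach the host, delete the subsystem over RPC, and stop the target. Roughly, with the NQN from this run and the PID variable from the earlier sketch:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # serialize against other users of the disk
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# Simplified waitforserial_disconnect: poll until the namespace is gone from lsblk.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess sends SIGTERM, then waits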
00:10:15.449 [2024-04-17 10:06:48.548304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.449 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.449 [2024-04-17 10:06:48.637105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.449 [2024-04-17 10:06:48.725876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.449 [2024-04-17 10:06:48.726025] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.449 [2024-04-17 10:06:48.726036] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.449 [2024-04-17 10:06:48.726046] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.449 [2024-04-17 10:06:48.726106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.449 [2024-04-17 10:06:48.726208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.449 [2024-04-17 10:06:48.726300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.449 [2024-04-17 10:06:48.726300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.384 10:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.384 10:06:49 -- common/autotest_common.sh@852 -- # return 0 00:10:16.384 10:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:16.384 10:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 10:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.384 10:06:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:16.384 10:06:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 [2024-04-17 10:06:49.460221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
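The in-capsule pass that starts here repeats the whole provisioning and filesystem sequence; the only functional difference is the transport's in-capsule data size. Sketch of the one changed call, with the 4096-byte value from the trace:

# With -c 4096, writes of up to 4 KiB can ride inside the NVMe/TCP command capsule
# instead of being pulled later with a separate R2T/H2C data transfer.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096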
00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 [2024-04-17 10:06:49.611025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:10:16.384 10:06:49 -- common/autotest_common.sh@1359 -- # local bs 00:10:16.384 10:06:49 -- common/autotest_common.sh@1360 -- # local nb 00:10:16.384 10:06:49 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.384 10:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.384 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:10:16.384 10:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.384 10:06:49 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:10:16.384 { 00:10:16.384 "name": "Malloc1", 00:10:16.384 "aliases": [ 00:10:16.384 "a8234308-b620-4444-8510-69f0bb16c62b" 00:10:16.384 ], 00:10:16.384 "product_name": "Malloc disk", 00:10:16.384 "block_size": 512, 00:10:16.384 "num_blocks": 1048576, 00:10:16.384 "uuid": "a8234308-b620-4444-8510-69f0bb16c62b", 00:10:16.384 "assigned_rate_limits": { 00:10:16.384 "rw_ios_per_sec": 0, 00:10:16.384 "rw_mbytes_per_sec": 0, 00:10:16.384 "r_mbytes_per_sec": 0, 00:10:16.384 "w_mbytes_per_sec": 0 00:10:16.384 }, 00:10:16.384 "claimed": true, 00:10:16.384 "claim_type": "exclusive_write", 00:10:16.384 "zoned": false, 00:10:16.384 "supported_io_types": { 00:10:16.384 "read": true, 00:10:16.384 "write": true, 00:10:16.384 "unmap": true, 00:10:16.384 "write_zeroes": true, 00:10:16.384 "flush": true, 00:10:16.384 "reset": true, 00:10:16.384 "compare": false, 00:10:16.384 "compare_and_write": false, 00:10:16.384 "abort": true, 00:10:16.384 "nvme_admin": false, 00:10:16.384 "nvme_io": false 00:10:16.384 }, 00:10:16.384 "memory_domains": [ 00:10:16.384 { 00:10:16.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.384 "dma_device_type": 2 00:10:16.384 } 00:10:16.384 ], 00:10:16.384 "driver_specific": {} 00:10:16.384 } 00:10:16.384 ]' 00:10:16.384 10:06:49 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:10:16.384 10:06:49 -- common/autotest_common.sh@1362 -- # bs=512 00:10:16.384 10:06:49 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:10:16.643 10:06:49 -- common/autotest_common.sh@1363 -- # nb=1048576 00:10:16.643 10:06:49 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:10:16.643 10:06:49 -- common/autotest_common.sh@1367 -- # echo 512 00:10:16.643 10:06:49 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:16.643 10:06:49 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.019 10:06:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.019 10:06:50 -- common/autotest_common.sh@1177 -- # local i=0 00:10:18.019 10:06:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.019 10:06:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:18.019 10:06:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:19.922 10:06:52 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:19.922 10:06:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:19.922 10:06:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.922 10:06:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:19.922 10:06:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.922 10:06:53 -- common/autotest_common.sh@1187 -- # return 0 00:10:19.922 10:06:53 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.922 10:06:53 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.922 10:06:53 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.922 10:06:53 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.922 10:06:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.922 10:06:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.922 10:06:53 -- setup/common.sh@80 -- # echo 536870912 00:10:19.922 10:06:53 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.922 10:06:53 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.922 10:06:53 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.922 10:06:53 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.922 10:06:53 -- target/filesystem.sh@69 -- # partprobe 00:10:20.489 10:06:53 -- target/filesystem.sh@70 -- # sleep 1 00:10:21.424 10:06:54 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:21.424 10:06:54 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:21.424 10:06:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:21.424 10:06:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.424 10:06:54 -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 ************************************ 00:10:21.424 START TEST filesystem_in_capsule_ext4 00:10:21.424 ************************************ 00:10:21.424 10:06:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:21.424 10:06:54 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:21.424 10:06:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.424 10:06:54 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:21.683 10:06:54 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:10:21.683 10:06:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:21.683 10:06:54 -- common/autotest_common.sh@904 -- # local i=0 00:10:21.683 10:06:54 -- common/autotest_common.sh@905 -- # local force 00:10:21.683 10:06:54 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:10:21.683 10:06:54 -- common/autotest_common.sh@908 -- # force=-F 00:10:21.683 10:06:54 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:21.683 mke2fs 1.46.5 (30-Dec-2021) 00:10:21.683 Discarding device blocks: 0/522240 done 00:10:21.683 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:21.683 Filesystem UUID: 29ae8832-11f6-4ac5-9878-df116115538f 00:10:21.683 Superblock backups stored on blocks: 00:10:21.683 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:21.683 00:10:21.683 Allocating group tables: 0/64 done 00:10:21.683 Writing inode tables: 0/64 done 00:10:23.061 Creating journal (8192 blocks): done 00:10:24.044 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:10:24.044 00:10:24.044 
10:06:57 -- common/autotest_common.sh@921 -- # return 0 00:10:24.044 10:06:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.612 10:06:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.871 10:06:57 -- target/filesystem.sh@25 -- # sync 00:10:24.871 10:06:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.871 10:06:57 -- target/filesystem.sh@27 -- # sync 00:10:24.871 10:06:57 -- target/filesystem.sh@29 -- # i=0 00:10:24.871 10:06:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.871 10:06:58 -- target/filesystem.sh@37 -- # kill -0 3306896 00:10:24.871 10:06:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.871 10:06:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.871 10:06:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.871 10:06:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.871 00:10:24.871 real 0m3.283s 00:10:24.871 user 0m0.019s 00:10:24.871 sys 0m0.071s 00:10:24.871 10:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.871 10:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.871 ************************************ 00:10:24.871 END TEST filesystem_in_capsule_ext4 00:10:24.871 ************************************ 00:10:24.871 10:06:58 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:24.871 10:06:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:24.871 10:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.871 10:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.871 ************************************ 00:10:24.872 START TEST filesystem_in_capsule_btrfs 00:10:24.872 ************************************ 00:10:24.872 10:06:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:24.872 10:06:58 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:24.872 10:06:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.872 10:06:58 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:24.872 10:06:58 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:10:24.872 10:06:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:24.872 10:06:58 -- common/autotest_common.sh@904 -- # local i=0 00:10:24.872 10:06:58 -- common/autotest_common.sh@905 -- # local force 00:10:24.872 10:06:58 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:10:24.872 10:06:58 -- common/autotest_common.sh@910 -- # force=-f 00:10:24.872 10:06:58 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:25.439 btrfs-progs v6.6.2 00:10:25.439 See https://btrfs.readthedocs.io for more information. 00:10:25.439 00:10:25.439 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:25.439 NOTE: several default settings have changed in version 5.15, please make sure 00:10:25.439 this does not affect your deployments: 00:10:25.439 - DUP for metadata (-m dup) 00:10:25.439 - enabled no-holes (-O no-holes) 00:10:25.439 - enabled free-space-tree (-R free-space-tree) 00:10:25.439 00:10:25.439 Label: (null) 00:10:25.439 UUID: db030fae-606e-4030-990b-04060f76bb8c 00:10:25.439 Node size: 16384 00:10:25.439 Sector size: 4096 00:10:25.439 Filesystem size: 510.00MiB 00:10:25.439 Block group profiles: 00:10:25.439 Data: single 8.00MiB 00:10:25.439 Metadata: DUP 32.00MiB 00:10:25.439 System: DUP 8.00MiB 00:10:25.439 SSD detected: yes 00:10:25.439 Zoned device: no 00:10:25.439 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:25.439 Runtime features: free-space-tree 00:10:25.439 Checksum: crc32c 00:10:25.439 Number of devices: 1 00:10:25.439 Devices: 00:10:25.439 ID SIZE PATH 00:10:25.439 1 510.00MiB /dev/nvme0n1p1 00:10:25.439 00:10:25.439 10:06:58 -- common/autotest_common.sh@921 -- # return 0 00:10:25.439 10:06:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:25.698 10:06:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:25.698 10:06:58 -- target/filesystem.sh@25 -- # sync 00:10:25.698 10:06:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:25.698 10:06:58 -- target/filesystem.sh@27 -- # sync 00:10:25.698 10:06:58 -- target/filesystem.sh@29 -- # i=0 00:10:25.698 10:06:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:25.698 10:06:58 -- target/filesystem.sh@37 -- # kill -0 3306896 00:10:25.698 10:06:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:25.698 10:06:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:25.698 10:06:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:25.698 10:06:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:25.698 00:10:25.698 real 0m0.807s 00:10:25.698 user 0m0.027s 00:10:25.698 sys 0m0.126s 00:10:25.698 10:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.698 10:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:25.698 ************************************ 00:10:25.698 END TEST filesystem_in_capsule_btrfs 00:10:25.698 ************************************ 00:10:25.698 10:06:58 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:25.698 10:06:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:25.698 10:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.698 10:06:58 -- common/autotest_common.sh@10 -- # set +x 00:10:25.698 ************************************ 00:10:25.698 START TEST filesystem_in_capsule_xfs 00:10:25.698 ************************************ 00:10:25.698 10:06:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:10:25.698 10:06:58 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:25.698 10:06:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.698 10:06:58 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:25.698 10:06:58 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:10:25.698 10:06:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:10:25.698 10:06:58 -- common/autotest_common.sh@904 -- # local i=0 00:10:25.698 10:06:58 -- common/autotest_common.sh@905 -- # local force 00:10:25.698 10:06:58 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:10:25.698 10:06:58 -- common/autotest_common.sh@910 -- # force=-f 
00:10:25.698 10:06:58 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:25.698 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:25.698 = sectsz=512 attr=2, projid32bit=1 00:10:25.698 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:25.698 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:25.698 data = bsize=4096 blocks=130560, imaxpct=25 00:10:25.698 = sunit=0 swidth=0 blks 00:10:25.698 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:25.698 log =internal log bsize=4096 blocks=16384, version=2 00:10:25.698 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:25.698 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:27.076 Discarding blocks...Done. 00:10:27.076 10:07:00 -- common/autotest_common.sh@921 -- # return 0 00:10:27.076 10:07:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.453 10:07:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.712 10:07:01 -- target/filesystem.sh@25 -- # sync 00:10:28.712 10:07:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.712 10:07:01 -- target/filesystem.sh@27 -- # sync 00:10:28.712 10:07:01 -- target/filesystem.sh@29 -- # i=0 00:10:28.712 10:07:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.712 10:07:01 -- target/filesystem.sh@37 -- # kill -0 3306896 00:10:28.712 10:07:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.712 10:07:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.712 10:07:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.712 10:07:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.712 00:10:28.712 real 0m2.913s 00:10:28.712 user 0m0.020s 00:10:28.712 sys 0m0.074s 00:10:28.712 10:07:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.712 10:07:01 -- common/autotest_common.sh@10 -- # set +x 00:10:28.712 ************************************ 00:10:28.712 END TEST filesystem_in_capsule_xfs 00:10:28.712 ************************************ 00:10:28.712 10:07:01 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:28.712 10:07:01 -- target/filesystem.sh@93 -- # sync 00:10:28.712 10:07:01 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.971 10:07:02 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.971 10:07:02 -- common/autotest_common.sh@1198 -- # local i=0 00:10:28.971 10:07:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:28.971 10:07:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.971 10:07:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:28.971 10:07:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.971 10:07:02 -- common/autotest_common.sh@1210 -- # return 0 00:10:28.971 10:07:02 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.971 10:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:28.971 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:10:28.971 10:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:28.971 10:07:02 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:28.971 10:07:02 -- target/filesystem.sh@101 -- # killprocess 3306896 00:10:28.971 10:07:02 -- common/autotest_common.sh@926 -- # '[' -z 3306896 ']' 00:10:28.971 10:07:02 -- common/autotest_common.sh@930 -- # kill -0 3306896 
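The same verification cycle was applied to each of the three filesystems above (ext4, btrfs, xfs): mount the partition, create and remove a test file with syncs in between, unmount, then confirm that the target process and the block devices are still present. A minimal sketch of that cycle and of the teardown that follows, using the literal names and target pid (3306896) from this run; the verify_fs wrapper name is only for readability and does not appear in the script:

    verify_fs() {
        mount /dev/nvme0n1p1 /mnt/device
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device

        # The nvmf_tgt process must still be alive, and the namespace plus its
        # partition must still be visible to the initiator.
        kill -0 3306896
        lsblk -l -o NAME | grep -q -w nvme0n1
        lsblk -l -o NAME | grep -q -w nvme0n1p1
    }

    # After the last filesystem pass the partition is dropped and the host detaches:
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1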
00:10:28.971 10:07:02 -- common/autotest_common.sh@931 -- # uname 00:10:28.971 10:07:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:28.971 10:07:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3306896 00:10:28.971 10:07:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:28.971 10:07:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:28.971 10:07:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3306896' 00:10:28.971 killing process with pid 3306896 00:10:28.971 10:07:02 -- common/autotest_common.sh@945 -- # kill 3306896 00:10:28.971 10:07:02 -- common/autotest_common.sh@950 -- # wait 3306896 00:10:29.230 10:07:02 -- target/filesystem.sh@102 -- # nvmfpid= 00:10:29.230 00:10:29.230 real 0m14.059s 00:10:29.230 user 0m55.061s 00:10:29.230 sys 0m1.253s 00:10:29.230 10:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.230 10:07:02 -- common/autotest_common.sh@10 -- # set +x 00:10:29.230 ************************************ 00:10:29.230 END TEST nvmf_filesystem_in_capsule 00:10:29.230 ************************************ 00:10:29.489 10:07:02 -- target/filesystem.sh@108 -- # nvmftestfini 00:10:29.489 10:07:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:29.489 10:07:02 -- nvmf/common.sh@116 -- # sync 00:10:29.489 10:07:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:29.489 10:07:02 -- nvmf/common.sh@119 -- # set +e 00:10:29.489 10:07:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:29.489 10:07:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:29.489 rmmod nvme_tcp 00:10:29.489 rmmod nvme_fabrics 00:10:29.489 rmmod nvme_keyring 00:10:29.489 10:07:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:29.489 10:07:02 -- nvmf/common.sh@123 -- # set -e 00:10:29.489 10:07:02 -- nvmf/common.sh@124 -- # return 0 00:10:29.489 10:07:02 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:10:29.489 10:07:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:29.489 10:07:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:29.489 10:07:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:29.489 10:07:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.489 10:07:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:29.489 10:07:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.489 10:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.490 10:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.393 10:07:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:31.393 00:10:31.393 real 0m35.890s 00:10:31.393 user 1m50.249s 00:10:31.393 sys 0m6.938s 00:10:31.394 10:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.394 10:07:04 -- common/autotest_common.sh@10 -- # set +x 00:10:31.394 ************************************ 00:10:31.394 END TEST nvmf_filesystem 00:10:31.394 ************************************ 00:10:31.652 10:07:04 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:31.652 10:07:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:31.652 10:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.652 10:07:04 -- common/autotest_common.sh@10 -- # set +x 00:10:31.652 ************************************ 00:10:31.652 START TEST nvmf_discovery 00:10:31.652 ************************************ 00:10:31.652 
10:07:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:31.652 * Looking for test storage... 00:10:31.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.652 10:07:04 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.652 10:07:04 -- nvmf/common.sh@7 -- # uname -s 00:10:31.652 10:07:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.652 10:07:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.652 10:07:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.652 10:07:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.652 10:07:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.652 10:07:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.652 10:07:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.652 10:07:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.652 10:07:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.652 10:07:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.652 10:07:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:31.652 10:07:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:31.652 10:07:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.652 10:07:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.652 10:07:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.652 10:07:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.652 10:07:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.652 10:07:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.652 10:07:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.652 10:07:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.652 10:07:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.652 10:07:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.652 10:07:04 -- paths/export.sh@5 -- # export PATH 00:10:31.652 10:07:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.652 10:07:04 -- nvmf/common.sh@46 -- # : 0 00:10:31.652 10:07:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:31.652 10:07:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:31.652 10:07:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:31.652 10:07:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.652 10:07:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.652 10:07:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:31.652 10:07:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:31.652 10:07:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:31.652 10:07:04 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:31.653 10:07:04 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:31.653 10:07:04 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:31.653 10:07:04 -- target/discovery.sh@15 -- # hash nvme 00:10:31.653 10:07:04 -- target/discovery.sh@20 -- # nvmftestinit 00:10:31.653 10:07:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:31.653 10:07:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.653 10:07:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:31.653 10:07:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:31.653 10:07:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:31.653 10:07:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.653 10:07:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.653 10:07:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.653 10:07:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:31.653 10:07:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:31.653 10:07:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:31.653 10:07:04 -- common/autotest_common.sh@10 -- # set +x 00:10:36.926 10:07:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:36.926 10:07:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:36.926 10:07:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:36.926 10:07:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:36.926 10:07:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:36.926 10:07:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:36.926 10:07:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:36.926 10:07:10 -- 
nvmf/common.sh@294 -- # net_devs=() 00:10:36.926 10:07:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:36.926 10:07:10 -- nvmf/common.sh@295 -- # e810=() 00:10:36.926 10:07:10 -- nvmf/common.sh@295 -- # local -ga e810 00:10:36.926 10:07:10 -- nvmf/common.sh@296 -- # x722=() 00:10:36.926 10:07:10 -- nvmf/common.sh@296 -- # local -ga x722 00:10:36.926 10:07:10 -- nvmf/common.sh@297 -- # mlx=() 00:10:36.926 10:07:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:36.926 10:07:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.926 10:07:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:36.926 10:07:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:36.926 10:07:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:36.926 10:07:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:36.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:36.926 10:07:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:36.926 10:07:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:36.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:36.926 10:07:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:36.926 10:07:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.926 10:07:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.926 10:07:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:36.926 Found net devices under 0000:af:00.0: cvl_0_0 00:10:36.926 10:07:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.926 10:07:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:36.926 10:07:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.926 10:07:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.926 10:07:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:36.926 Found net devices under 0000:af:00.1: cvl_0_1 00:10:36.926 10:07:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.926 10:07:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:36.926 10:07:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:36.926 10:07:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:36.926 10:07:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.926 10:07:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.926 10:07:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.926 10:07:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:36.926 10:07:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.926 10:07:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.926 10:07:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:36.926 10:07:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.926 10:07:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.926 10:07:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:36.926 10:07:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:36.926 10:07:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.926 10:07:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.186 10:07:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.186 10:07:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.186 10:07:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:37.186 10:07:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.186 10:07:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.186 10:07:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.186 10:07:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:37.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:10:37.186 00:10:37.186 --- 10.0.0.2 ping statistics --- 00:10:37.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.186 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:37.186 10:07:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:10:37.186 00:10:37.186 --- 10.0.0.1 ping statistics --- 00:10:37.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.186 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:10:37.186 10:07:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.186 10:07:10 -- nvmf/common.sh@410 -- # return 0 00:10:37.186 10:07:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:37.186 10:07:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.186 10:07:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:37.186 10:07:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:37.186 10:07:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.186 10:07:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:37.186 10:07:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:37.445 10:07:10 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:37.445 10:07:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:37.445 10:07:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:37.445 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:10:37.445 10:07:10 -- nvmf/common.sh@469 -- # nvmfpid=3313314 00:10:37.445 10:07:10 -- nvmf/common.sh@470 -- # waitforlisten 3313314 00:10:37.445 10:07:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.445 10:07:10 -- common/autotest_common.sh@819 -- # '[' -z 3313314 ']' 00:10:37.445 10:07:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.445 10:07:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:37.445 10:07:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.445 10:07:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:37.445 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:10:37.445 [2024-04-17 10:07:10.602039] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:37.445 [2024-04-17 10:07:10.602099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.445 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.445 [2024-04-17 10:07:10.688352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.445 [2024-04-17 10:07:10.778072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:37.445 [2024-04-17 10:07:10.778211] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.445 [2024-04-17 10:07:10.778222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.445 [2024-04-17 10:07:10.778236] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.445 [2024-04-17 10:07:10.778272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.445 [2024-04-17 10:07:10.778372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.445 [2024-04-17 10:07:10.778461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.445 [2024-04-17 10:07:10.778462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.381 10:07:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:38.381 10:07:11 -- common/autotest_common.sh@852 -- # return 0 00:10:38.381 10:07:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:38.381 10:07:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.381 10:07:11 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 [2024-04-17 10:07:11.584377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@26 -- # seq 1 4 00:10:38.381 10:07:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.381 10:07:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 Null1 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 [2024-04-17 10:07:11.632676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.381 10:07:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 Null2 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:38.381 10:07:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.381 10:07:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.381 Null3 00:10:38.381 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.381 10:07:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:38.381 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.381 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.382 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.382 10:07:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:38.382 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.382 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.382 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.382 10:07:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:38.382 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.382 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.382 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.382 10:07:11 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:38.382 10:07:11 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:38.382 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.382 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.382 Null4 00:10:38.382 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.382 10:07:11 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:38.382 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.382 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.641 10:07:11 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:38.641 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.641 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.641 10:07:11 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:38.641 
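Before probing the target, discovery.sh populates it over RPC: a TCP transport, then four subsystems, each backed by a null bdev (size 102400, block size 512, exactly as in the trace) and listening on 10.0.0.2:4420. The xtrace lines above correspond to the following condensed loop; rpc_cmd is the test helper used throughout this log:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

Immediately afterwards the script adds a listener for the discovery subsystem on port 4420 and a referral to port 4430, which is why the discovery log below reports six records.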
10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.641 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.641 10:07:11 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:38.641 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.641 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.641 10:07:11 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:38.641 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.641 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.641 10:07:11 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:38.641 00:10:38.641 Discovery Log Number of Records 6, Generation counter 6 00:10:38.641 =====Discovery Log Entry 0====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: current discovery subsystem 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4420 00:10:38.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: explicit discovery connections, duplicate discovery information 00:10:38.641 sectype: none 00:10:38.641 =====Discovery Log Entry 1====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: nvme subsystem 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4420 00:10:38.641 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: none 00:10:38.641 sectype: none 00:10:38.641 =====Discovery Log Entry 2====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: nvme subsystem 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4420 00:10:38.641 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: none 00:10:38.641 sectype: none 00:10:38.641 =====Discovery Log Entry 3====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: nvme subsystem 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4420 00:10:38.641 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: none 00:10:38.641 sectype: none 00:10:38.641 =====Discovery Log Entry 4====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: nvme subsystem 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4420 00:10:38.641 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: none 00:10:38.641 sectype: none 00:10:38.641 =====Discovery Log Entry 5====== 00:10:38.641 trtype: tcp 00:10:38.641 adrfam: ipv4 00:10:38.641 subtype: discovery subsystem referral 00:10:38.641 treq: not required 00:10:38.641 portid: 0 00:10:38.641 trsvcid: 4430 00:10:38.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:38.641 traddr: 10.0.0.2 00:10:38.641 eflags: none 00:10:38.641 sectype: none 00:10:38.641 10:07:11 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:38.641 Perform nvmf subsystem discovery via RPC 00:10:38.641 10:07:11 -- 
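The kernel initiator's view above lists six discovery records: the current discovery subsystem, the four cnode subsystems, and the port-4430 referral. One way to assert that count from the host side, not something the script itself does, just an illustrative sketch reusing the hostnqn/hostid from this run (the grep pattern matches the record headers printed above):

    records=$(nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
                            --hostid=00abaa28-3537-eb11-906e-0017a4403562 \
                            -t tcp -a 10.0.0.2 -s 4420 | grep -c 'Discovery Log Entry')
    [ "$records" -eq 6 ]   # 1 discovery subsystem + 4 NVMe subsystems + 1 referral

The RPC-side equivalent that the test runs next, nvmf_get_subsystems, returns the same information as JSON.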
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:38.641 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.641 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.641 [2024-04-17 10:07:11.921586] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:10:38.641 [ 00:10:38.641 { 00:10:38.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:38.641 "subtype": "Discovery", 00:10:38.641 "listen_addresses": [ 00:10:38.641 { 00:10:38.641 "transport": "TCP", 00:10:38.641 "trtype": "TCP", 00:10:38.641 "adrfam": "IPv4", 00:10:38.641 "traddr": "10.0.0.2", 00:10:38.641 "trsvcid": "4420" 00:10:38.641 } 00:10:38.641 ], 00:10:38.641 "allow_any_host": true, 00:10:38.641 "hosts": [] 00:10:38.641 }, 00:10:38.641 { 00:10:38.641 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:38.641 "subtype": "NVMe", 00:10:38.641 "listen_addresses": [ 00:10:38.641 { 00:10:38.641 "transport": "TCP", 00:10:38.641 "trtype": "TCP", 00:10:38.641 "adrfam": "IPv4", 00:10:38.641 "traddr": "10.0.0.2", 00:10:38.641 "trsvcid": "4420" 00:10:38.641 } 00:10:38.641 ], 00:10:38.641 "allow_any_host": true, 00:10:38.641 "hosts": [], 00:10:38.641 "serial_number": "SPDK00000000000001", 00:10:38.641 "model_number": "SPDK bdev Controller", 00:10:38.641 "max_namespaces": 32, 00:10:38.641 "min_cntlid": 1, 00:10:38.641 "max_cntlid": 65519, 00:10:38.641 "namespaces": [ 00:10:38.641 { 00:10:38.641 "nsid": 1, 00:10:38.641 "bdev_name": "Null1", 00:10:38.641 "name": "Null1", 00:10:38.641 "nguid": "7F839CCA2BF8485BB3C7B72E98B75DAF", 00:10:38.641 "uuid": "7f839cca-2bf8-485b-b3c7-b72e98b75daf" 00:10:38.641 } 00:10:38.641 ] 00:10:38.641 }, 00:10:38.641 { 00:10:38.641 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:38.641 "subtype": "NVMe", 00:10:38.641 "listen_addresses": [ 00:10:38.641 { 00:10:38.641 "transport": "TCP", 00:10:38.641 "trtype": "TCP", 00:10:38.641 "adrfam": "IPv4", 00:10:38.641 "traddr": "10.0.0.2", 00:10:38.641 "trsvcid": "4420" 00:10:38.641 } 00:10:38.641 ], 00:10:38.641 "allow_any_host": true, 00:10:38.641 "hosts": [], 00:10:38.641 "serial_number": "SPDK00000000000002", 00:10:38.641 "model_number": "SPDK bdev Controller", 00:10:38.641 "max_namespaces": 32, 00:10:38.641 "min_cntlid": 1, 00:10:38.641 "max_cntlid": 65519, 00:10:38.641 "namespaces": [ 00:10:38.641 { 00:10:38.641 "nsid": 1, 00:10:38.641 "bdev_name": "Null2", 00:10:38.641 "name": "Null2", 00:10:38.642 "nguid": "1E4E7BCE5DE44E078A0A82C5B452DF9D", 00:10:38.642 "uuid": "1e4e7bce-5de4-4e07-8a0a-82c5b452df9d" 00:10:38.642 } 00:10:38.642 ] 00:10:38.642 }, 00:10:38.642 { 00:10:38.642 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:38.642 "subtype": "NVMe", 00:10:38.642 "listen_addresses": [ 00:10:38.642 { 00:10:38.642 "transport": "TCP", 00:10:38.642 "trtype": "TCP", 00:10:38.642 "adrfam": "IPv4", 00:10:38.642 "traddr": "10.0.0.2", 00:10:38.642 "trsvcid": "4420" 00:10:38.642 } 00:10:38.642 ], 00:10:38.642 "allow_any_host": true, 00:10:38.642 "hosts": [], 00:10:38.642 "serial_number": "SPDK00000000000003", 00:10:38.642 "model_number": "SPDK bdev Controller", 00:10:38.642 "max_namespaces": 32, 00:10:38.642 "min_cntlid": 1, 00:10:38.642 "max_cntlid": 65519, 00:10:38.642 "namespaces": [ 00:10:38.642 { 00:10:38.642 "nsid": 1, 00:10:38.642 "bdev_name": "Null3", 00:10:38.642 "name": "Null3", 00:10:38.642 "nguid": "B257CFB2B823402CB359A31860D9D989", 00:10:38.642 "uuid": "b257cfb2-b823-402c-b359-a31860d9d989" 00:10:38.642 } 00:10:38.642 ] 
00:10:38.642 }, 00:10:38.642 { 00:10:38.642 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:38.642 "subtype": "NVMe", 00:10:38.642 "listen_addresses": [ 00:10:38.642 { 00:10:38.642 "transport": "TCP", 00:10:38.642 "trtype": "TCP", 00:10:38.642 "adrfam": "IPv4", 00:10:38.642 "traddr": "10.0.0.2", 00:10:38.642 "trsvcid": "4420" 00:10:38.642 } 00:10:38.642 ], 00:10:38.642 "allow_any_host": true, 00:10:38.642 "hosts": [], 00:10:38.642 "serial_number": "SPDK00000000000004", 00:10:38.642 "model_number": "SPDK bdev Controller", 00:10:38.642 "max_namespaces": 32, 00:10:38.642 "min_cntlid": 1, 00:10:38.642 "max_cntlid": 65519, 00:10:38.642 "namespaces": [ 00:10:38.642 { 00:10:38.642 "nsid": 1, 00:10:38.642 "bdev_name": "Null4", 00:10:38.642 "name": "Null4", 00:10:38.642 "nguid": "EEC68538BE934601940B595F496C9983", 00:10:38.642 "uuid": "eec68538-be93-4601-940b-595f496c9983" 00:10:38.642 } 00:10:38.642 ] 00:10:38.642 } 00:10:38.642 ] 00:10:38.642 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.642 10:07:11 -- target/discovery.sh@42 -- # seq 1 4 00:10:38.642 10:07:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.642 10:07:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.642 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.642 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.642 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.642 10:07:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:38.642 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.642 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.642 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.642 10:07:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.642 10:07:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:38.642 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.642 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.642 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:38.901 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.901 10:07:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:38.901 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:11 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:38.901 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:11 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.901 10:07:11 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:38.901 10:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:10:38.901 10:07:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:38.901 10:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:12 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:38.901 10:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:12 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:38.901 10:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:38.901 10:07:12 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:38.901 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:10:38.901 10:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:38.901 10:07:12 -- target/discovery.sh@49 -- # check_bdevs= 00:10:38.901 10:07:12 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:38.901 10:07:12 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:38.902 10:07:12 -- target/discovery.sh@57 -- # nvmftestfini 00:10:38.902 10:07:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:38.902 10:07:12 -- nvmf/common.sh@116 -- # sync 00:10:38.902 10:07:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:38.902 10:07:12 -- nvmf/common.sh@119 -- # set +e 00:10:38.902 10:07:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:38.902 10:07:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:38.902 rmmod nvme_tcp 00:10:38.902 rmmod nvme_fabrics 00:10:38.902 rmmod nvme_keyring 00:10:38.902 10:07:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:38.902 10:07:12 -- nvmf/common.sh@123 -- # set -e 00:10:38.902 10:07:12 -- nvmf/common.sh@124 -- # return 0 00:10:38.902 10:07:12 -- nvmf/common.sh@477 -- # '[' -n 3313314 ']' 00:10:38.902 10:07:12 -- nvmf/common.sh@478 -- # killprocess 3313314 00:10:38.902 10:07:12 -- common/autotest_common.sh@926 -- # '[' -z 3313314 ']' 00:10:38.902 10:07:12 -- common/autotest_common.sh@930 -- # kill -0 3313314 00:10:38.902 10:07:12 -- common/autotest_common.sh@931 -- # uname 00:10:38.902 10:07:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:38.902 10:07:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3313314 00:10:38.902 10:07:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:38.902 10:07:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:38.902 10:07:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3313314' 00:10:38.902 killing process with pid 3313314 00:10:38.902 10:07:12 -- common/autotest_common.sh@945 -- # kill 3313314 00:10:38.902 [2024-04-17 10:07:12.196786] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:38.902 10:07:12 -- common/autotest_common.sh@950 -- # wait 3313314 00:10:39.161 10:07:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:39.161 10:07:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:39.161 10:07:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:39.161 10:07:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.161 10:07:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:39.161 10:07:12 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.161 10:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.161 10:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.706 10:07:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:41.706 00:10:41.706 real 0m9.739s 00:10:41.706 user 0m8.287s 00:10:41.706 sys 0m4.706s 00:10:41.706 10:07:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.706 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:41.706 ************************************ 00:10:41.706 END TEST nvmf_discovery 00:10:41.706 ************************************ 00:10:41.706 10:07:14 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:41.706 10:07:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:41.706 10:07:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:41.706 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:41.706 ************************************ 00:10:41.706 START TEST nvmf_referrals 00:10:41.706 ************************************ 00:10:41.706 10:07:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:41.706 * Looking for test storage... 00:10:41.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.706 10:07:14 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.706 10:07:14 -- nvmf/common.sh@7 -- # uname -s 00:10:41.706 10:07:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.706 10:07:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.706 10:07:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.706 10:07:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.706 10:07:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.706 10:07:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.706 10:07:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.706 10:07:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.706 10:07:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.706 10:07:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.706 10:07:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:41.706 10:07:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:41.706 10:07:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.706 10:07:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.706 10:07:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.706 10:07:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.706 10:07:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.706 10:07:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.706 10:07:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.706 10:07:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.706 10:07:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.706 10:07:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.706 10:07:14 -- paths/export.sh@5 -- # export PATH 00:10:41.706 10:07:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.706 10:07:14 -- nvmf/common.sh@46 -- # : 0 00:10:41.706 10:07:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:41.706 10:07:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:41.706 10:07:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:41.706 10:07:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.706 10:07:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.706 10:07:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:41.706 10:07:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:41.706 10:07:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:41.706 10:07:14 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:41.706 10:07:14 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:41.706 10:07:14 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:41.706 10:07:14 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:41.706 10:07:14 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:41.706 10:07:14 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:41.706 10:07:14 -- target/referrals.sh@37 -- # nvmftestinit 00:10:41.706 10:07:14 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:10:41.706 10:07:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.706 10:07:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:41.706 10:07:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:41.706 10:07:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:41.706 10:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.706 10:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.706 10:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.706 10:07:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:41.706 10:07:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:41.706 10:07:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:41.706 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:10:46.982 10:07:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:46.982 10:07:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:46.982 10:07:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:46.982 10:07:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:46.982 10:07:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:46.982 10:07:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:46.982 10:07:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:46.982 10:07:20 -- nvmf/common.sh@294 -- # net_devs=() 00:10:46.982 10:07:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:10:46.982 10:07:20 -- nvmf/common.sh@295 -- # e810=() 00:10:46.982 10:07:20 -- nvmf/common.sh@295 -- # local -ga e810 00:10:46.982 10:07:20 -- nvmf/common.sh@296 -- # x722=() 00:10:46.982 10:07:20 -- nvmf/common.sh@296 -- # local -ga x722 00:10:46.982 10:07:20 -- nvmf/common.sh@297 -- # mlx=() 00:10:46.982 10:07:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:46.982 10:07:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.982 10:07:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.983 10:07:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:46.983 10:07:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:46.983 10:07:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:46.983 10:07:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:46.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:46.983 10:07:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:46.983 10:07:20 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:46.983 10:07:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:46.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:46.983 10:07:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:46.983 10:07:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.983 10:07:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.983 10:07:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:46.983 Found net devices under 0000:af:00.0: cvl_0_0 00:10:46.983 10:07:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.983 10:07:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:46.983 10:07:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.983 10:07:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.983 10:07:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:46.983 Found net devices under 0000:af:00.1: cvl_0_1 00:10:46.983 10:07:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.983 10:07:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:46.983 10:07:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:46.983 10:07:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:46.983 10:07:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.983 10:07:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.983 10:07:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.983 10:07:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:46.983 10:07:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.983 10:07:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.983 10:07:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:46.983 10:07:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.983 10:07:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.983 10:07:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:46.983 10:07:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:46.983 10:07:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.983 10:07:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:10:46.983 10:07:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.983 10:07:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.983 10:07:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:46.983 10:07:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.983 10:07:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.242 10:07:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.242 10:07:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:47.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:10:47.242 00:10:47.242 --- 10.0.0.2 ping statistics --- 00:10:47.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.242 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:47.242 10:07:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:10:47.242 00:10:47.243 --- 10.0.0.1 ping statistics --- 00:10:47.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.243 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:10:47.243 10:07:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.243 10:07:20 -- nvmf/common.sh@410 -- # return 0 00:10:47.243 10:07:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:47.243 10:07:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.243 10:07:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:47.243 10:07:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:47.243 10:07:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.243 10:07:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:47.243 10:07:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:47.243 10:07:20 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:47.243 10:07:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:47.243 10:07:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:47.243 10:07:20 -- common/autotest_common.sh@10 -- # set +x 00:10:47.243 10:07:20 -- nvmf/common.sh@469 -- # nvmfpid=3317286 00:10:47.243 10:07:20 -- nvmf/common.sh@470 -- # waitforlisten 3317286 00:10:47.243 10:07:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.243 10:07:20 -- common/autotest_common.sh@819 -- # '[' -z 3317286 ']' 00:10:47.243 10:07:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.243 10:07:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:47.243 10:07:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.243 10:07:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:47.243 10:07:20 -- common/autotest_common.sh@10 -- # set +x 00:10:47.243 [2024-04-17 10:07:20.437752] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
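The nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables steps. Condensed into plain commands (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.1/10.0.0.2 addresses are specific to this run):

  # put the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 on cvl_0_1; the namespaced target port gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic (TCP port 4420) through the firewall and check reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1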
00:10:47.243 [2024-04-17 10:07:20.437810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.243 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.243 [2024-04-17 10:07:20.526530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.502 [2024-04-17 10:07:20.614689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:47.502 [2024-04-17 10:07:20.614829] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.502 [2024-04-17 10:07:20.614839] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.502 [2024-04-17 10:07:20.614849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.502 [2024-04-17 10:07:20.616662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.502 [2024-04-17 10:07:20.616678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.502 [2024-04-17 10:07:20.616817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.502 [2024-04-17 10:07:20.616818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.068 10:07:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:48.068 10:07:21 -- common/autotest_common.sh@852 -- # return 0 00:10:48.068 10:07:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:48.068 10:07:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:48.068 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.327 10:07:21 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 [2024-04-17 10:07:21.420474] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 [2024-04-17 10:07:21.436723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:48.327 10:07:21 -- target/referrals.sh@48 -- # jq length 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:48.327 10:07:21 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:48.327 10:07:21 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:48.327 10:07:21 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:48.327 10:07:21 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:48.327 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.327 10:07:21 -- target/referrals.sh@21 -- # sort 00:10:48.327 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.327 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:48.327 10:07:21 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:48.327 10:07:21 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:48.327 10:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:48.327 10:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:48.327 10:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.327 10:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:48.327 10:07:21 -- target/referrals.sh@26 -- # sort 00:10:48.586 10:07:21 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:48.586 10:07:21 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:48.586 10:07:21 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:48.586 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.586 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.586 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.586 10:07:21 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:48.586 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.586 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.586 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.586 10:07:21 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:48.586 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.586 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.586 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.586 10:07:21 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:48.586 10:07:21 -- target/referrals.sh@56 -- # jq length 00:10:48.586 10:07:21 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.586 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.586 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.586 10:07:21 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:48.586 10:07:21 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:48.586 10:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:48.586 10:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:48.586 10:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.586 10:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:48.586 10:07:21 -- target/referrals.sh@26 -- # sort 00:10:48.586 10:07:21 -- target/referrals.sh@26 -- # echo 00:10:48.586 10:07:21 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:48.586 10:07:21 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:48.586 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.586 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.586 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.587 10:07:21 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:48.587 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.587 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.587 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.846 10:07:21 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:48.846 10:07:21 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:48.846 10:07:21 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:48.846 10:07:21 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:48.846 10:07:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:48.846 10:07:21 -- target/referrals.sh@21 -- # sort 00:10:48.846 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.846 10:07:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:48.846 10:07:21 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:48.846 10:07:21 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:48.846 10:07:21 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:48.846 10:07:21 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:48.846 10:07:21 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:48.846 10:07:21 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.846 10:07:21 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:48.846 10:07:21 -- target/referrals.sh@26 -- # sort 00:10:48.846 10:07:22 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:48.846 10:07:22 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:48.846 10:07:22 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:48.846 10:07:22 -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:48.846 10:07:22 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:48.846 10:07:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.847 10:07:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:49.106 10:07:22 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:49.106 10:07:22 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:49.106 10:07:22 -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:49.106 10:07:22 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:49.106 10:07:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.106 10:07:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:49.106 10:07:22 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:49.106 10:07:22 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:49.106 10:07:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.106 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 10:07:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.106 10:07:22 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:49.106 10:07:22 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:49.106 10:07:22 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.106 10:07:22 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:49.106 10:07:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.106 10:07:22 -- target/referrals.sh@21 -- # sort 00:10:49.106 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.106 10:07:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.106 10:07:22 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:49.106 10:07:22 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:49.106 10:07:22 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:49.106 10:07:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.106 10:07:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.106 10:07:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.106 10:07:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.106 10:07:22 -- target/referrals.sh@26 -- # sort 00:10:49.365 10:07:22 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:49.365 10:07:22 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:49.365 10:07:22 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:49.365 10:07:22 -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:49.365 10:07:22 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:49.365 10:07:22 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.365 10:07:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:49.623 10:07:22 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:49.623 10:07:22 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:49.623 10:07:22 -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:49.623 10:07:22 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:49.623 10:07:22 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.623 10:07:22 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:49.623 10:07:22 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:49.623 10:07:22 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:49.623 10:07:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.623 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 10:07:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.623 10:07:22 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.623 10:07:22 -- target/referrals.sh@82 -- # jq length 00:10:49.623 10:07:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:49.623 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 10:07:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:49.623 10:07:22 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:49.623 10:07:22 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:49.623 10:07:22 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.623 10:07:22 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.624 10:07:22 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.624 10:07:22 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.624 10:07:22 -- target/referrals.sh@26 -- # sort 00:10:49.882 10:07:22 -- target/referrals.sh@26 -- # echo 00:10:49.882 10:07:22 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:49.882 10:07:22 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:49.882 10:07:22 -- target/referrals.sh@86 -- # nvmftestfini 00:10:49.883 10:07:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:49.883 10:07:22 -- nvmf/common.sh@116 -- # sync 00:10:49.883 10:07:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:49.883 10:07:22 -- nvmf/common.sh@119 -- # set +e 00:10:49.883 10:07:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:49.883 10:07:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:49.883 rmmod nvme_tcp 00:10:49.883 rmmod nvme_fabrics 00:10:49.883 rmmod nvme_keyring 00:10:49.883 10:07:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:49.883 10:07:23 -- nvmf/common.sh@123 -- # set -e 00:10:49.883 10:07:23 -- nvmf/common.sh@124 -- # return 0 00:10:49.883 10:07:23 -- nvmf/common.sh@477 
-- # '[' -n 3317286 ']' 00:10:49.883 10:07:23 -- nvmf/common.sh@478 -- # killprocess 3317286 00:10:49.883 10:07:23 -- common/autotest_common.sh@926 -- # '[' -z 3317286 ']' 00:10:49.883 10:07:23 -- common/autotest_common.sh@930 -- # kill -0 3317286 00:10:49.883 10:07:23 -- common/autotest_common.sh@931 -- # uname 00:10:49.883 10:07:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:49.883 10:07:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3317286 00:10:49.883 10:07:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:49.883 10:07:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:49.883 10:07:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3317286' 00:10:49.883 killing process with pid 3317286 00:10:49.883 10:07:23 -- common/autotest_common.sh@945 -- # kill 3317286 00:10:49.883 10:07:23 -- common/autotest_common.sh@950 -- # wait 3317286 00:10:50.142 10:07:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:50.142 10:07:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:50.142 10:07:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:50.142 10:07:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.142 10:07:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:50.142 10:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.142 10:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.142 10:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.046 10:07:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:52.305 00:10:52.305 real 0m10.856s 00:10:52.305 user 0m13.179s 00:10:52.305 sys 0m5.095s 00:10:52.305 10:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.305 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:10:52.305 ************************************ 00:10:52.305 END TEST nvmf_referrals 00:10:52.305 ************************************ 00:10:52.305 10:07:25 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:52.305 10:07:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:52.305 10:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.305 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:10:52.305 ************************************ 00:10:52.305 START TEST nvmf_connect_disconnect 00:10:52.305 ************************************ 00:10:52.305 10:07:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:52.305 * Looking for test storage... 
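In the nvmf_referrals test that just finished, rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py. Stripped of the wrappers, the discovery-referral round trip it exercises looks roughly like this (addresses and ports as used in this run; the nvme discover calls in the log additionally pass the machine-specific --hostnqn/--hostid):

  # transport + discovery listener on the target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # add three referrals and confirm they show up both over RPC and in the discovery log
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json
  # remove them again; both views should then be empty
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430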
00:10:52.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.305 10:07:25 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.305 10:07:25 -- nvmf/common.sh@7 -- # uname -s 00:10:52.305 10:07:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.305 10:07:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.305 10:07:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.305 10:07:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.305 10:07:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.305 10:07:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.305 10:07:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.305 10:07:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.305 10:07:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.305 10:07:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.305 10:07:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:52.305 10:07:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:52.305 10:07:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.305 10:07:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.305 10:07:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.305 10:07:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.305 10:07:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.305 10:07:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.305 10:07:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.305 10:07:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.305 10:07:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.306 10:07:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.306 10:07:25 -- paths/export.sh@5 -- # export PATH 00:10:52.306 10:07:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.306 10:07:25 -- nvmf/common.sh@46 -- # : 0 00:10:52.306 10:07:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:52.306 10:07:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:52.306 10:07:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:52.306 10:07:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.306 10:07:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.306 10:07:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:52.306 10:07:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:52.306 10:07:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:52.306 10:07:25 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.306 10:07:25 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.306 10:07:25 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:52.306 10:07:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:52.306 10:07:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.306 10:07:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:52.306 10:07:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:52.306 10:07:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:52.306 10:07:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.306 10:07:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.306 10:07:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.306 10:07:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:52.306 10:07:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:52.306 10:07:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:52.306 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:10:58.875 10:07:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:58.875 10:07:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:10:58.875 10:07:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:10:58.875 10:07:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:10:58.875 10:07:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:10:58.875 10:07:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:10:58.875 10:07:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:10:58.875 10:07:31 -- nvmf/common.sh@294 -- # net_devs=() 00:10:58.875 10:07:31 -- nvmf/common.sh@294 -- # local -ga net_devs 
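The gather_supported_nvmf_pci_devs pass traced here buckets NICs purely by PCI vendor:device ID; both ports on this node report 0x8086:0x159b (Intel E810, ice driver) and land in the e810 array. Assuming lspci is installed, the same match can be reproduced by hand:

  # list only devices with vendor 0x8086 and device 0x159b (the cvl_0_* ports in this log)
  lspci -nn -d 8086:159b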
00:10:58.875 10:07:31 -- nvmf/common.sh@295 -- # e810=() 00:10:58.876 10:07:31 -- nvmf/common.sh@295 -- # local -ga e810 00:10:58.876 10:07:31 -- nvmf/common.sh@296 -- # x722=() 00:10:58.876 10:07:31 -- nvmf/common.sh@296 -- # local -ga x722 00:10:58.876 10:07:31 -- nvmf/common.sh@297 -- # mlx=() 00:10:58.876 10:07:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:10:58.876 10:07:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.876 10:07:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:58.876 10:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:58.876 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:58.876 10:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:10:58.876 10:07:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:58.876 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:58.876 10:07:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:58.876 10:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.876 10:07:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.876 10:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:10:58.876 Found net devices under 0000:af:00.0: cvl_0_0 00:10:58.876 10:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:10:58.876 10:07:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.876 10:07:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.876 10:07:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:58.876 Found net devices under 0000:af:00.1: cvl_0_1 00:10:58.876 10:07:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:10:58.876 10:07:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:10:58.876 10:07:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.876 10:07:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.876 10:07:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:10:58.876 10:07:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.876 10:07:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.876 10:07:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:10:58.876 10:07:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.876 10:07:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.876 10:07:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:10:58.876 10:07:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:10:58.876 10:07:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.876 10:07:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.876 10:07:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.876 10:07:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.876 10:07:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:10:58.876 10:07:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.876 10:07:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.876 10:07:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.876 10:07:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:10:58.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:10:58.876 00:10:58.876 --- 10.0.0.2 ping statistics --- 00:10:58.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.876 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:58.876 10:07:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:58.876 00:10:58.876 --- 10.0.0.1 ping statistics --- 00:10:58.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.876 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:58.876 10:07:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.876 10:07:31 -- nvmf/common.sh@410 -- # return 0 00:10:58.876 10:07:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:58.876 10:07:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.876 10:07:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:58.876 10:07:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.876 10:07:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:58.876 10:07:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:58.876 10:07:31 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:58.876 10:07:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:58.876 10:07:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:58.876 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:10:58.876 10:07:31 -- nvmf/common.sh@469 -- # nvmfpid=3321645 00:10:58.876 10:07:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.876 10:07:31 -- nvmf/common.sh@470 -- # waitforlisten 3321645 00:10:58.876 10:07:31 -- common/autotest_common.sh@819 -- # '[' -z 3321645 ']' 00:10:58.876 10:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.876 10:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:58.876 10:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.876 10:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:58.876 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:10:58.876 [2024-04-17 10:07:31.376751] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:58.876 [2024-04-17 10:07:31.376805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.876 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.876 [2024-04-17 10:07:31.463508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.876 [2024-04-17 10:07:31.551621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:58.876 [2024-04-17 10:07:31.551774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.876 [2024-04-17 10:07:31.551786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.876 [2024-04-17 10:07:31.551795] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
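As the app_setup_trace notices above say, the tracepoint data for this nvmf_tgt instance (started with -i 0, group mask 0xFFFF) can be grabbed either live or after the run, for example:

  # live snapshot of the nvmf tracepoints from shared-memory id 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis (destination path is arbitrary)
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0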
00:10:58.876 [2024-04-17 10:07:31.551846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.876 [2024-04-17 10:07:31.551965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.876 [2024-04-17 10:07:31.552069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.876 [2024-04-17 10:07:31.552069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.136 10:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:59.136 10:07:32 -- common/autotest_common.sh@852 -- # return 0 00:10:59.136 10:07:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:59.136 10:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 10:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.136 10:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 [2024-04-17 10:07:32.357467] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.136 10:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:59.136 10:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 10:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.136 10:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 10:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.136 10:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 10:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.136 10:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:59.136 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:10:59.136 [2024-04-17 10:07:32.413426] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.136 10:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:10:59.136 10:07:32 -- target/connect_disconnect.sh@34 -- # set +x 00:11:01.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:11:11.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.571 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:04.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.209 10:11:23 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
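The long run of "disconnected 1 controller(s)" lines above is the output of the loop configured earlier: 100 iterations of connecting to and disconnecting from nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, with 8 I/O queue pairs per connect. A minimal sketch of the loop body (the harness steps between connect and disconnect are not traced here; host NQN/ID as reported earlier in this log):

  for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --hostid=00abaa28-3537-eb11-906e-0017a4403562
    # prints 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)'
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done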
00:14:50.209 10:11:23 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:50.209 10:11:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.209 10:11:23 -- nvmf/common.sh@116 -- # sync 00:14:50.209 10:11:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:50.209 10:11:23 -- nvmf/common.sh@119 -- # set +e 00:14:50.209 10:11:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.209 10:11:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:50.209 rmmod nvme_tcp 00:14:50.209 rmmod nvme_fabrics 00:14:50.209 rmmod nvme_keyring 00:14:50.209 10:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.209 10:11:23 -- nvmf/common.sh@123 -- # set -e 00:14:50.209 10:11:23 -- nvmf/common.sh@124 -- # return 0 00:14:50.209 10:11:23 -- nvmf/common.sh@477 -- # '[' -n 3321645 ']' 00:14:50.209 10:11:23 -- nvmf/common.sh@478 -- # killprocess 3321645 00:14:50.209 10:11:23 -- common/autotest_common.sh@926 -- # '[' -z 3321645 ']' 00:14:50.209 10:11:23 -- common/autotest_common.sh@930 -- # kill -0 3321645 00:14:50.209 10:11:23 -- common/autotest_common.sh@931 -- # uname 00:14:50.209 10:11:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:50.209 10:11:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3321645 00:14:50.209 10:11:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:50.209 10:11:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:50.209 10:11:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3321645' 00:14:50.209 killing process with pid 3321645 00:14:50.209 10:11:23 -- common/autotest_common.sh@945 -- # kill 3321645 00:14:50.209 10:11:23 -- common/autotest_common.sh@950 -- # wait 3321645 00:14:50.209 10:11:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:50.209 10:11:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:50.209 10:11:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:50.209 10:11:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.209 10:11:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:50.209 10:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.209 10:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.209 10:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.743 10:11:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:52.743 00:14:52.743 real 4m0.039s 00:14:52.743 user 15m19.389s 00:14:52.743 sys 0m21.111s 00:14:52.743 10:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.743 10:11:25 -- common/autotest_common.sh@10 -- # set +x 00:14:52.743 ************************************ 00:14:52.743 END TEST nvmf_connect_disconnect 00:14:52.743 ************************************ 00:14:52.743 10:11:25 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.743 10:11:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:52.743 10:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.743 10:11:25 -- common/autotest_common.sh@10 -- # set +x 00:14:52.743 ************************************ 00:14:52.743 START TEST nvmf_multitarget 00:14:52.743 ************************************ 00:14:52.743 10:11:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:52.743 * Looking for test storage... 
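Both tests end with the same nvmftestfini teardown seen above; outside the harness it amounts to roughly the following (the PID and interface name are from this run; remove_spdk_ns additionally deletes the cvl_0_0_ns_spdk namespace, which is not traced here):

  # unload the host-side NVMe/TCP modules
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt started for the test (pid 3321645 in this run), then drop the test address
  kill "$nvmfpid" && wait "$nvmfpid"
  ip -4 addr flush cvl_0_1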
00:14:52.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.743 10:11:25 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.743 10:11:25 -- nvmf/common.sh@7 -- # uname -s 00:14:52.743 10:11:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.743 10:11:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.743 10:11:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.743 10:11:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.743 10:11:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.743 10:11:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.743 10:11:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.743 10:11:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.743 10:11:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.743 10:11:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.743 10:11:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:52.743 10:11:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:52.743 10:11:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.743 10:11:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.743 10:11:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.743 10:11:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.743 10:11:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.743 10:11:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.743 10:11:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.743 10:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.743 10:11:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.743 10:11:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.743 10:11:25 -- paths/export.sh@5 -- # export PATH 00:14:52.743 10:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.743 10:11:25 -- nvmf/common.sh@46 -- # : 0 00:14:52.743 10:11:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.743 10:11:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.743 10:11:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.743 10:11:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.743 10:11:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.743 10:11:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:52.743 10:11:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.743 10:11:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.743 10:11:25 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:52.743 10:11:25 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:52.743 10:11:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.743 10:11:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.743 10:11:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.743 10:11:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.743 10:11:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.743 10:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.743 10:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.743 10:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.743 10:11:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:52.743 10:11:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:52.743 10:11:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:52.743 10:11:25 -- common/autotest_common.sh@10 -- # set +x 00:14:58.011 10:11:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:58.011 10:11:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:58.011 10:11:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:58.011 10:11:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:58.011 10:11:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:58.011 10:11:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:58.011 10:11:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:58.011 10:11:31 -- nvmf/common.sh@294 -- # net_devs=() 00:14:58.011 10:11:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:58.011 10:11:31 -- 
nvmf/common.sh@295 -- # e810=() 00:14:58.011 10:11:31 -- nvmf/common.sh@295 -- # local -ga e810 00:14:58.011 10:11:31 -- nvmf/common.sh@296 -- # x722=() 00:14:58.011 10:11:31 -- nvmf/common.sh@296 -- # local -ga x722 00:14:58.011 10:11:31 -- nvmf/common.sh@297 -- # mlx=() 00:14:58.011 10:11:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:58.011 10:11:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.011 10:11:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:58.011 10:11:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:58.011 10:11:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.011 10:11:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:58.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:58.011 10:11:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:58.011 10:11:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:58.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:58.011 10:11:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.011 10:11:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.011 10:11:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.011 10:11:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:14:58.011 Found net devices under 0000:af:00.0: cvl_0_0 00:14:58.011 10:11:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.011 10:11:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:58.011 10:11:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.011 10:11:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.011 10:11:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:58.011 Found net devices under 0000:af:00.1: cvl_0_1 00:14:58.011 10:11:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.011 10:11:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:58.011 10:11:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:58.011 10:11:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:58.011 10:11:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.011 10:11:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.011 10:11:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.011 10:11:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:58.011 10:11:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.011 10:11:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.011 10:11:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:58.011 10:11:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.011 10:11:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.011 10:11:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:58.011 10:11:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:58.011 10:11:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.011 10:11:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.011 10:11:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.011 10:11:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.011 10:11:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:58.011 10:11:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.270 10:11:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.270 10:11:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.270 10:11:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:58.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:14:58.270 00:14:58.270 --- 10.0.0.2 ping statistics --- 00:14:58.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.270 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:58.270 10:11:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:14:58.270 00:14:58.270 --- 10.0.0.1 ping statistics --- 00:14:58.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.270 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:58.270 10:11:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.270 10:11:31 -- nvmf/common.sh@410 -- # return 0 00:14:58.270 10:11:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.270 10:11:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.270 10:11:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.270 10:11:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.270 10:11:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.270 10:11:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.270 10:11:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:58.270 10:11:31 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:58.270 10:11:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:58.270 10:11:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:58.270 10:11:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.270 10:11:31 -- nvmf/common.sh@469 -- # nvmfpid=3369478 00:14:58.270 10:11:31 -- nvmf/common.sh@470 -- # waitforlisten 3369478 00:14:58.270 10:11:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.270 10:11:31 -- common/autotest_common.sh@819 -- # '[' -z 3369478 ']' 00:14:58.270 10:11:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.270 10:11:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.270 10:11:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.270 10:11:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.270 10:11:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.270 [2024-04-17 10:11:31.484678] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:58.270 [2024-04-17 10:11:31.484735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.270 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.270 [2024-04-17 10:11:31.575432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.527 [2024-04-17 10:11:31.664171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.527 [2024-04-17 10:11:31.664314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.527 [2024-04-17 10:11:31.664326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.527 [2024-04-17 10:11:31.664336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
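The nvmftestinit/nvmfappstart sequence traced above splits the two E810 ports into a target side and an initiator side, then boots the SPDK target inside the namespace. A sketch of that plumbing; the rpc_get_methods poll stands in for waitforlisten, and the relative binary/script paths are assumptions:

  # one port becomes the target (inside a netns), its sibling stays in the root netns as initiator
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                 # confirm the two sides can reach each other
  # start the target in the namespace, then wait for its RPC socket to answer
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done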
00:14:58.527 [2024-04-17 10:11:31.664387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.527 [2024-04-17 10:11:31.664499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.527 [2024-04-17 10:11:31.664612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.527 [2024-04-17 10:11:31.664612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.093 10:11:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.093 10:11:32 -- common/autotest_common.sh@852 -- # return 0 00:14:59.093 10:11:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:59.093 10:11:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:59.093 10:11:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.093 10:11:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.093 10:11:32 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:59.093 10:11:32 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.093 10:11:32 -- target/multitarget.sh@21 -- # jq length 00:14:59.352 10:11:32 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:59.352 10:11:32 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:59.352 "nvmf_tgt_1" 00:14:59.352 10:11:32 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:59.610 "nvmf_tgt_2" 00:14:59.610 10:11:32 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.610 10:11:32 -- target/multitarget.sh@28 -- # jq length 00:14:59.610 10:11:32 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:59.610 10:11:32 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:59.868 true 00:14:59.868 10:11:33 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:59.868 true 00:14:59.868 10:11:33 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.868 10:11:33 -- target/multitarget.sh@35 -- # jq length 00:15:00.126 10:11:33 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:00.126 10:11:33 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:00.126 10:11:33 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:00.126 10:11:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:00.126 10:11:33 -- nvmf/common.sh@116 -- # sync 00:15:00.126 10:11:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:00.126 10:11:33 -- nvmf/common.sh@119 -- # set +e 00:15:00.126 10:11:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:00.126 10:11:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:00.126 rmmod nvme_tcp 00:15:00.126 rmmod nvme_fabrics 00:15:00.126 rmmod nvme_keyring 00:15:00.126 10:11:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:00.126 10:11:33 -- nvmf/common.sh@123 -- # set -e 00:15:00.126 10:11:33 -- nvmf/common.sh@124 -- # return 0 
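The multitarget test body above is a short RPC exercise: the target count must go 1 -> 3 -> 1 as two extra targets are created and deleted. The same sequence against a running target, using the multitarget_rpc.py helper exactly as the trace does (jq only counts the returned entries):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length              # default target only -> 1
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length              # -> 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length              # back to 1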
00:15:00.126 10:11:33 -- nvmf/common.sh@477 -- # '[' -n 3369478 ']' 00:15:00.126 10:11:33 -- nvmf/common.sh@478 -- # killprocess 3369478 00:15:00.126 10:11:33 -- common/autotest_common.sh@926 -- # '[' -z 3369478 ']' 00:15:00.126 10:11:33 -- common/autotest_common.sh@930 -- # kill -0 3369478 00:15:00.126 10:11:33 -- common/autotest_common.sh@931 -- # uname 00:15:00.126 10:11:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:00.126 10:11:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3369478 00:15:00.126 10:11:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:00.126 10:11:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:00.126 10:11:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3369478' 00:15:00.126 killing process with pid 3369478 00:15:00.126 10:11:33 -- common/autotest_common.sh@945 -- # kill 3369478 00:15:00.126 10:11:33 -- common/autotest_common.sh@950 -- # wait 3369478 00:15:00.385 10:11:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:00.385 10:11:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:00.385 10:11:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:00.385 10:11:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.385 10:11:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:00.385 10:11:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.385 10:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.385 10:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.918 10:11:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:02.918 00:15:02.918 real 0m10.190s 00:15:02.918 user 0m10.265s 00:15:02.918 sys 0m4.910s 00:15:02.918 10:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.918 10:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:02.918 ************************************ 00:15:02.918 END TEST nvmf_multitarget 00:15:02.918 ************************************ 00:15:02.918 10:11:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.918 10:11:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:02.918 10:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:02.918 10:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:02.918 ************************************ 00:15:02.918 START TEST nvmf_rpc 00:15:02.918 ************************************ 00:15:02.919 10:11:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.919 * Looking for test storage... 
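killprocess in the trace above guards the kill with a liveness and identity check so a stale or recycled PID is never signalled. A simplified sketch of that guard (the real helper also handles the uname/sudo branches visible in the trace); the PID literal is only illustrative:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0                             # process already gone
      [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] || return 1   # expect an SPDK reactor
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                                            # valid when $pid is our child
  }
  killprocess 3369478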
00:15:02.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.919 10:11:35 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.919 10:11:35 -- nvmf/common.sh@7 -- # uname -s 00:15:02.919 10:11:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.919 10:11:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.919 10:11:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.919 10:11:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.919 10:11:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.919 10:11:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.919 10:11:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.919 10:11:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.919 10:11:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.919 10:11:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.919 10:11:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:02.919 10:11:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:02.919 10:11:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.919 10:11:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.919 10:11:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.919 10:11:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.919 10:11:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.919 10:11:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.919 10:11:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.919 10:11:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.919 10:11:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.919 10:11:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.919 10:11:35 -- paths/export.sh@5 -- # export PATH 00:15:02.919 10:11:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.919 10:11:35 -- nvmf/common.sh@46 -- # : 0 00:15:02.919 10:11:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.919 10:11:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.919 10:11:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.919 10:11:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.919 10:11:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.919 10:11:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.919 10:11:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.919 10:11:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.919 10:11:35 -- target/rpc.sh@11 -- # loops=5 00:15:02.919 10:11:35 -- target/rpc.sh@23 -- # nvmftestinit 00:15:02.919 10:11:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:02.919 10:11:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.919 10:11:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.919 10:11:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.919 10:11:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.919 10:11:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.919 10:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.919 10:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.919 10:11:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.919 10:11:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.919 10:11:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.919 10:11:35 -- common/autotest_common.sh@10 -- # set +x 00:15:08.191 10:11:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.191 10:11:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.191 10:11:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.191 10:11:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.191 10:11:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.191 10:11:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.191 10:11:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.191 10:11:41 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.191 10:11:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.191 10:11:41 -- nvmf/common.sh@295 -- # e810=() 00:15:08.191 10:11:41 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.191 
10:11:41 -- nvmf/common.sh@296 -- # x722=() 00:15:08.191 10:11:41 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.191 10:11:41 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.191 10:11:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.191 10:11:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.191 10:11:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.191 10:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:08.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:08.191 10:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.191 10:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:08.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:08.191 10:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.191 10:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.191 10:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.191 10:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:08.191 Found net devices under 0000:af:00.0: cvl_0_0 00:15:08.191 10:11:41 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.191 10:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.191 10:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.191 10:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:08.191 Found net devices under 0000:af:00.1: cvl_0_1 00:15:08.191 10:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.191 10:11:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:08.191 10:11:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:08.191 10:11:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.191 10:11:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.191 10:11:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:08.191 10:11:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.191 10:11:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.191 10:11:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:08.191 10:11:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.191 10:11:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.191 10:11:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:08.191 10:11:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:08.191 10:11:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.191 10:11:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.191 10:11:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.191 10:11:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.191 10:11:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:08.191 10:11:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.191 10:11:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.191 10:11:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.191 10:11:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:08.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:15:08.191 00:15:08.191 --- 10.0.0.2 ping statistics --- 00:15:08.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.191 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:15:08.191 10:11:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:15:08.191 00:15:08.191 --- 10.0.0.1 ping statistics --- 00:15:08.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.191 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:15:08.191 10:11:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.191 10:11:41 -- nvmf/common.sh@410 -- # return 0 00:15:08.191 10:11:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.191 10:11:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.191 10:11:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.192 10:11:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.192 10:11:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.192 10:11:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.192 10:11:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.192 10:11:41 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:08.192 10:11:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.192 10:11:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:08.192 10:11:41 -- common/autotest_common.sh@10 -- # set +x 00:15:08.192 10:11:41 -- nvmf/common.sh@469 -- # nvmfpid=3373516 00:15:08.192 10:11:41 -- nvmf/common.sh@470 -- # waitforlisten 3373516 00:15:08.192 10:11:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.192 10:11:41 -- common/autotest_common.sh@819 -- # '[' -z 3373516 ']' 00:15:08.192 10:11:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.192 10:11:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:08.192 10:11:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.192 10:11:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:08.192 10:11:41 -- common/autotest_common.sh@10 -- # set +x 00:15:08.451 [2024-04-17 10:11:41.548913] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:08.451 [2024-04-17 10:11:41.548968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.451 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.451 [2024-04-17 10:11:41.633762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.451 [2024-04-17 10:11:41.722111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.451 [2024-04-17 10:11:41.722254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.451 [2024-04-17 10:11:41.722265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.451 [2024-04-17 10:11:41.722275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:08.451 [2024-04-17 10:11:41.722323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.451 [2024-04-17 10:11:41.722422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.451 [2024-04-17 10:11:41.722539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.451 [2024-04-17 10:11:41.722539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.385 10:11:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:09.385 10:11:42 -- common/autotest_common.sh@852 -- # return 0 00:15:09.385 10:11:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:09.385 10:11:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:09.385 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.385 10:11:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.385 10:11:42 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:09.385 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.385 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.385 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.385 10:11:42 -- target/rpc.sh@26 -- # stats='{ 00:15:09.385 "tick_rate": 2200000000, 00:15:09.385 "poll_groups": [ 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_0", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_1", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_2", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_3", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [] 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 }' 00:15:09.385 10:11:42 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:09.385 10:11:42 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:09.385 10:11:42 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:09.385 10:11:42 -- target/rpc.sh@15 -- # wc -l 00:15:09.385 10:11:42 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:09.385 10:11:42 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:09.385 10:11:42 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:09.385 10:11:42 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.385 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.385 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.385 [2024-04-17 10:11:42.653908] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.385 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.385 10:11:42 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:09.385 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.385 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.385 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.385 10:11:42 -- target/rpc.sh@33 -- # stats='{ 00:15:09.385 "tick_rate": 2200000000, 00:15:09.385 "poll_groups": [ 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_0", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [ 00:15:09.385 { 00:15:09.385 "trtype": "TCP" 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_1", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [ 00:15:09.385 { 00:15:09.385 "trtype": "TCP" 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_2", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [ 00:15:09.385 { 00:15:09.385 "trtype": "TCP" 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 }, 00:15:09.385 { 00:15:09.385 "name": "nvmf_tgt_poll_group_3", 00:15:09.385 "admin_qpairs": 0, 00:15:09.385 "io_qpairs": 0, 00:15:09.385 "current_admin_qpairs": 0, 00:15:09.385 "current_io_qpairs": 0, 00:15:09.385 "pending_bdev_io": 0, 00:15:09.385 "completed_nvme_io": 0, 00:15:09.385 "transports": [ 00:15:09.385 { 00:15:09.385 "trtype": "TCP" 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 } 00:15:09.385 ] 00:15:09.385 }' 00:15:09.385 10:11:42 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:09.385 10:11:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:09.385 10:11:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:09.385 10:11:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.643 10:11:42 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:09.643 10:11:42 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:09.643 10:11:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:09.643 10:11:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:09.643 10:11:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.643 10:11:42 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:09.643 10:11:42 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:09.643 10:11:42 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:09.643 10:11:42 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:09.643 10:11:42 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:09.643 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.643 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 Malloc1 00:15:09.643 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.643 10:11:42 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:09.643 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.643 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 
10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.643 10:11:42 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.643 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.643 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.643 10:11:42 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:09.643 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.643 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.643 10:11:42 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.643 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.643 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 [2024-04-17 10:11:42.834076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.644 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.644 10:11:42 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:09.644 10:11:42 -- common/autotest_common.sh@640 -- # local es=0 00:15:09.644 10:11:42 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:09.644 10:11:42 -- common/autotest_common.sh@628 -- # local arg=nvme 00:15:09.644 10:11:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.644 10:11:42 -- common/autotest_common.sh@632 -- # type -t nvme 00:15:09.644 10:11:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.644 10:11:42 -- common/autotest_common.sh@634 -- # type -P nvme 00:15:09.644 10:11:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.644 10:11:42 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:15:09.644 10:11:42 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:15:09.644 10:11:42 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:15:09.644 [2024-04-17 10:11:42.858754] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:15:09.644 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:09.644 could not add new controller: failed to write to nvme-fabrics device 00:15:09.644 10:11:42 -- common/autotest_common.sh@643 -- # es=1 00:15:09.644 10:11:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:09.644 10:11:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:09.644 10:11:42 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:15:09.644 10:11:42 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:09.644 10:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.644 10:11:42 -- common/autotest_common.sh@10 -- # set +x 00:15:09.644 10:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.644 10:11:42 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.019 10:11:44 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.019 10:11:44 -- common/autotest_common.sh@1177 -- # local i=0 00:15:11.019 10:11:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.019 10:11:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:11.019 10:11:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:12.920 10:11:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:12.920 10:11:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:12.920 10:11:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.920 10:11:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:12.920 10:11:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.920 10:11:46 -- common/autotest_common.sh@1187 -- # return 0 00:15:12.920 10:11:46 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.179 10:11:46 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.179 10:11:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:13.179 10:11:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:13.179 10:11:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.179 10:11:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:13.179 10:11:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.179 10:11:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:13.179 10:11:46 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:13.179 10:11:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.179 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.179 10:11:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.179 10:11:46 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.179 10:11:46 -- common/autotest_common.sh@640 -- # local es=0 00:15:13.179 10:11:46 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.179 10:11:46 -- common/autotest_common.sh@628 -- # local arg=nvme 00:15:13.179 10:11:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.179 10:11:46 -- common/autotest_common.sh@632 -- # type -t nvme 00:15:13.179 10:11:46 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.179 10:11:46 -- common/autotest_common.sh@634 -- # type -P nvme 00:15:13.179 10:11:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.179 10:11:46 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:15:13.179 10:11:46 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:15:13.179 10:11:46 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.179 [2024-04-17 10:11:46.358138] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:15:13.179 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:13.179 could not add new controller: failed to write to nvme-fabrics device 00:15:13.179 10:11:46 -- common/autotest_common.sh@643 -- # es=1 00:15:13.179 10:11:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:13.179 10:11:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:13.179 10:11:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:13.179 10:11:46 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:13.179 10:11:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:13.179 10:11:46 -- common/autotest_common.sh@10 -- # set +x 00:15:13.179 10:11:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:13.179 10:11:46 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.554 10:11:47 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.554 10:11:47 -- common/autotest_common.sh@1177 -- # local i=0 00:15:14.554 10:11:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.554 10:11:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:14.554 10:11:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:16.536 10:11:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:16.536 10:11:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:16.536 10:11:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.536 10:11:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:16.536 10:11:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.536 10:11:49 -- common/autotest_common.sh@1187 -- # return 0 00:15:16.536 10:11:49 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.536 10:11:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:16.536 10:11:49 -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.536 10:11:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:16.536 10:11:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.536 10:11:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:16.536 10:11:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:16.536 10:11:49 -- common/autotest_common.sh@1210 -- # return 0 00:15:16.536 10:11:49 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.536 10:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.536 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 10:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.804 10:11:49 -- target/rpc.sh@81 -- # seq 1 5 00:15:16.804 10:11:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:16.804 10:11:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:16.804 10:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.804 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 10:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.804 10:11:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.804 10:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.804 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 [2024-04-17 10:11:49.892932] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.804 10:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.804 10:11:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:16.804 10:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.804 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 10:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.804 10:11:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:16.804 10:11:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:16.804 10:11:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.804 10:11:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:16.804 10:11:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:18.179 10:11:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.179 10:11:51 -- common/autotest_common.sh@1177 -- # local i=0 00:15:18.179 10:11:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.179 10:11:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:18.179 10:11:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:20.079 10:11:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:20.079 10:11:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:20.079 10:11:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.079 10:11:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:20.079 10:11:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.079 10:11:53 -- common/autotest_common.sh@1187 -- # return 0 00:15:20.079 10:11:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:20.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.079 10:11:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:20.079 10:11:53 -- common/autotest_common.sh@1198 -- # local i=0 00:15:20.079 10:11:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:20.079 10:11:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
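For reference, a minimal standalone sketch of the host-ACL sequence traced above (target/rpc.sh@61 through @76): whitelist the host NQN, connect, disconnect, revoke the host and confirm the connect is rejected, then re-enable any-host access. It assumes an SPDK target already listening on 10.0.0.2:4420 and reuses the host NQN/ID from the trace; the rpc.py path and the failure check are illustrative, not part of the test script.

    RPC=./scripts/rpc.py                     # illustrative path to SPDK's rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
    HOSTID=00abaa28-3537-eb11-906e-0017a4403562

    # Whitelist the host, connect, verify the namespace shows up, disconnect.
    $RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    nvme disconnect -n "$NQN"

    # Revoke the host; the next connect should fail with "does not allow host",
    # exactly as in the trace above.
    $RPC nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
        && echo "unexpected: connect succeeded without host ACL entry"

    # Re-enable any-host access and connect once more.
    $RPC nvmf_subsystem_allow_any_host -e "$NQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420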
00:15:20.337 10:11:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:20.337 10:11:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:20.337 10:11:53 -- common/autotest_common.sh@1210 -- # return 0 00:15:20.337 10:11:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:20.337 10:11:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 [2024-04-17 10:11:53.457529] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:20.337 10:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.337 10:11:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.337 10:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.337 10:11:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:21.711 10:11:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:21.711 10:11:54 -- common/autotest_common.sh@1177 -- # local i=0 00:15:21.711 10:11:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.711 10:11:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:21.711 10:11:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:23.613 10:11:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:23.613 10:11:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:23.613 10:11:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.613 10:11:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:23.613 10:11:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.613 10:11:56 -- 
common/autotest_common.sh@1187 -- # return 0 00:15:23.613 10:11:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.613 10:11:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.613 10:11:56 -- common/autotest_common.sh@1198 -- # local i=0 00:15:23.613 10:11:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:23.613 10:11:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.613 10:11:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:23.613 10:11:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.870 10:11:56 -- common/autotest_common.sh@1210 -- # return 0 00:15:23.870 10:11:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 10:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 10:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:23.870 10:11:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 10:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 [2024-04-17 10:11:56.981128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.870 10:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 10:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:23.870 10:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.870 10:11:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.870 10:11:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.870 10:11:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.241 10:11:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:25.241 10:11:58 -- common/autotest_common.sh@1177 -- # local i=0 00:15:25.241 10:11:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.241 10:11:58 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:15:25.241 10:11:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:27.140 10:12:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:27.140 10:12:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:27.140 10:12:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.140 10:12:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:27.140 10:12:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.140 10:12:00 -- common/autotest_common.sh@1187 -- # return 0 00:15:27.140 10:12:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.140 10:12:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.140 10:12:00 -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.140 10:12:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:27.140 10:12:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.140 10:12:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:27.140 10:12:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.140 10:12:00 -- common/autotest_common.sh@1210 -- # return 0 00:15:27.140 10:12:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 10:12:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 10:12:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:27.140 10:12:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 10:12:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 [2024-04-17 10:12:00.408169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 10:12:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 10:12:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:27.140 10:12:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.140 10:12:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.140 10:12:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.140 
10:12:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:28.514 10:12:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:28.514 10:12:01 -- common/autotest_common.sh@1177 -- # local i=0 00:15:28.514 10:12:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.514 10:12:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:28.514 10:12:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:31.045 10:12:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:31.045 10:12:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:31.045 10:12:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.045 10:12:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:31.045 10:12:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.045 10:12:03 -- common/autotest_common.sh@1187 -- # return 0 00:15:31.045 10:12:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:31.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.045 10:12:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:31.045 10:12:03 -- common/autotest_common.sh@1198 -- # local i=0 00:15:31.045 10:12:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:31.045 10:12:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.045 10:12:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:31.046 10:12:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.046 10:12:03 -- common/autotest_common.sh@1210 -- # return 0 00:15:31.046 10:12:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:31.046 10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.046 10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:31.046 10:12:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:31.046 10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.046 10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 [2024-04-17 10:12:03.913077] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:31.046 
10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:31.046 10:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.046 10:12:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.046 10:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.046 10:12:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:31.979 10:12:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:31.979 10:12:05 -- common/autotest_common.sh@1177 -- # local i=0 00:15:31.979 10:12:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.979 10:12:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:31.979 10:12:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:34.507 10:12:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:34.507 10:12:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:34.507 10:12:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:34.507 10:12:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.507 10:12:07 -- common/autotest_common.sh@1187 -- # return 0 00:15:34.507 10:12:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.507 10:12:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@1198 -- # local i=0 00:15:34.507 10:12:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:34.507 10:12:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:34.507 10:12:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@1210 -- # return 0 00:15:34.507 10:12:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@99 -- # seq 1 5 00:15:34.507 10:12:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:34.507 10:12:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 [2024-04-17 10:12:07.443127] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:34.507 10:12:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 [2024-04-17 10:12:07.491248] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:34.507 10:12:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 [2024-04-17 10:12:07.539434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.507 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.507 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.507 10:12:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:34.507 10:12:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.507 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 [2024-04-17 10:12:07.591611] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 
10:12:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:34.508 10:12:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 [2024-04-17 10:12:07.639786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:15:34.508 10:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.508 10:12:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.508 10:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.508 10:12:07 -- target/rpc.sh@110 -- # stats='{ 00:15:34.508 "tick_rate": 2200000000, 00:15:34.508 "poll_groups": [ 00:15:34.508 { 00:15:34.508 "name": "nvmf_tgt_poll_group_0", 00:15:34.508 "admin_qpairs": 2, 00:15:34.508 "io_qpairs": 196, 00:15:34.508 "current_admin_qpairs": 0, 00:15:34.508 "current_io_qpairs": 0, 00:15:34.508 "pending_bdev_io": 0, 00:15:34.508 "completed_nvme_io": 346, 00:15:34.508 "transports": [ 00:15:34.508 { 00:15:34.508 "trtype": "TCP" 00:15:34.508 } 00:15:34.508 ] 00:15:34.508 }, 00:15:34.508 { 00:15:34.508 "name": "nvmf_tgt_poll_group_1", 00:15:34.508 "admin_qpairs": 2, 00:15:34.508 "io_qpairs": 196, 00:15:34.508 "current_admin_qpairs": 0, 00:15:34.508 "current_io_qpairs": 0, 00:15:34.508 "pending_bdev_io": 0, 00:15:34.508 "completed_nvme_io": 246, 00:15:34.508 "transports": [ 00:15:34.508 { 00:15:34.508 "trtype": "TCP" 00:15:34.508 } 00:15:34.508 ] 00:15:34.508 }, 00:15:34.508 { 00:15:34.508 "name": "nvmf_tgt_poll_group_2", 00:15:34.508 "admin_qpairs": 1, 00:15:34.508 "io_qpairs": 196, 00:15:34.508 "current_admin_qpairs": 0, 00:15:34.508 "current_io_qpairs": 0, 00:15:34.508 "pending_bdev_io": 0, 00:15:34.508 "completed_nvme_io": 247, 00:15:34.508 "transports": [ 00:15:34.508 { 00:15:34.508 "trtype": "TCP" 00:15:34.508 } 00:15:34.508 ] 00:15:34.508 }, 00:15:34.508 { 00:15:34.508 "name": "nvmf_tgt_poll_group_3", 00:15:34.508 "admin_qpairs": 2, 00:15:34.508 "io_qpairs": 196, 00:15:34.508 "current_admin_qpairs": 0, 00:15:34.508 "current_io_qpairs": 0, 00:15:34.508 "pending_bdev_io": 0, 00:15:34.508 "completed_nvme_io": 295, 00:15:34.508 "transports": [ 00:15:34.508 { 00:15:34.508 "trtype": "TCP" 00:15:34.508 } 00:15:34.508 ] 00:15:34.508 } 00:15:34.508 ] 00:15:34.508 }' 00:15:34.508 10:12:07 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:34.508 10:12:07 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:34.508 10:12:07 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:34.508 10:12:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:34.508 10:12:07 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:15:34.508 10:12:07 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:34.508 10:12:07 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:34.508 10:12:07 -- target/rpc.sh@123 -- # nvmftestfini 00:15:34.508 10:12:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:34.508 10:12:07 -- nvmf/common.sh@116 -- # sync 00:15:34.508 10:12:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:34.508 10:12:07 -- nvmf/common.sh@119 -- # set +e 00:15:34.508 10:12:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:34.508 10:12:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:34.508 rmmod nvme_tcp 00:15:34.508 rmmod nvme_fabrics 00:15:34.766 rmmod nvme_keyring 00:15:34.766 10:12:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:34.766 10:12:07 -- nvmf/common.sh@123 -- # set -e 00:15:34.766 10:12:07 -- 
nvmf/common.sh@124 -- # return 0 00:15:34.766 10:12:07 -- nvmf/common.sh@477 -- # '[' -n 3373516 ']' 00:15:34.766 10:12:07 -- nvmf/common.sh@478 -- # killprocess 3373516 00:15:34.766 10:12:07 -- common/autotest_common.sh@926 -- # '[' -z 3373516 ']' 00:15:34.766 10:12:07 -- common/autotest_common.sh@930 -- # kill -0 3373516 00:15:34.766 10:12:07 -- common/autotest_common.sh@931 -- # uname 00:15:34.766 10:12:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:34.766 10:12:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3373516 00:15:34.766 10:12:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:34.766 10:12:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:34.766 10:12:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3373516' 00:15:34.766 killing process with pid 3373516 00:15:34.766 10:12:07 -- common/autotest_common.sh@945 -- # kill 3373516 00:15:34.766 10:12:07 -- common/autotest_common.sh@950 -- # wait 3373516 00:15:35.025 10:12:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:35.025 10:12:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:35.025 10:12:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:35.025 10:12:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.025 10:12:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:35.025 10:12:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.025 10:12:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.025 10:12:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.928 10:12:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:36.928 00:15:36.928 real 0m34.496s 00:15:36.928 user 1m47.052s 00:15:36.928 sys 0m6.110s 00:15:36.928 10:12:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.928 10:12:10 -- common/autotest_common.sh@10 -- # set +x 00:15:36.928 ************************************ 00:15:36.928 END TEST nvmf_rpc 00:15:36.928 ************************************ 00:15:37.186 10:12:10 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:37.186 10:12:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:37.186 10:12:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:37.186 10:12:10 -- common/autotest_common.sh@10 -- # set +x 00:15:37.186 ************************************ 00:15:37.186 START TEST nvmf_invalid 00:15:37.186 ************************************ 00:15:37.186 10:12:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:37.186 * Looking for test storage... 
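The jsum checks a few entries above reduce the nvmf_get_stats JSON with jq and awk to confirm that qpairs were actually created across all poll groups. A standalone equivalent, assuming the same rpc.py and the JSON shape shown in the trace:

    RPC=./scripts/rpc.py                     # illustrative path to SPDK's rpc.py
    stats=$($RPC nvmf_get_stats)

    # One value per poll group, summed by awk (filter and reducer as in the trace).
    admin_total=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}')
    io_total=$(echo "$stats"    | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1}END{print s}')

    # The test only asserts the totals are positive, e.g. (( 7 > 0 )) and (( 784 > 0 )).
    (( admin_total > 0 )) && (( io_total > 0 )) && echo "qpair stats look sane"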
00:15:37.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.186 10:12:10 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.186 10:12:10 -- nvmf/common.sh@7 -- # uname -s 00:15:37.186 10:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.186 10:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.186 10:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.186 10:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.186 10:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.186 10:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.186 10:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.186 10:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.186 10:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.186 10:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.186 10:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:37.186 10:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:37.186 10:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.186 10:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.186 10:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.186 10:12:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.186 10:12:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.186 10:12:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.186 10:12:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.186 10:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.186 10:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.186 10:12:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.186 10:12:10 -- paths/export.sh@5 -- # export PATH 00:15:37.186 10:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.186 10:12:10 -- nvmf/common.sh@46 -- # : 0 00:15:37.186 10:12:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.186 10:12:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.187 10:12:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.187 10:12:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.187 10:12:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.187 10:12:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.187 10:12:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.187 10:12:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.187 10:12:10 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:37.187 10:12:10 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.187 10:12:10 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:37.187 10:12:10 -- target/invalid.sh@14 -- # target=foobar 00:15:37.187 10:12:10 -- target/invalid.sh@16 -- # RANDOM=0 00:15:37.187 10:12:10 -- target/invalid.sh@34 -- # nvmftestinit 00:15:37.187 10:12:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.187 10:12:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.187 10:12:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.187 10:12:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.187 10:12:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.187 10:12:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.187 10:12:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.187 10:12:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.187 10:12:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:37.187 10:12:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:37.187 10:12:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:37.187 10:12:10 -- common/autotest_common.sh@10 -- # set +x 00:15:43.756 10:12:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:43.756 10:12:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:43.756 10:12:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:43.756 10:12:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:43.756 10:12:15 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:43.756 10:12:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:43.756 10:12:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:43.756 10:12:15 -- nvmf/common.sh@294 -- # net_devs=() 00:15:43.756 10:12:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:43.756 10:12:15 -- nvmf/common.sh@295 -- # e810=() 00:15:43.756 10:12:15 -- nvmf/common.sh@295 -- # local -ga e810 00:15:43.756 10:12:15 -- nvmf/common.sh@296 -- # x722=() 00:15:43.756 10:12:15 -- nvmf/common.sh@296 -- # local -ga x722 00:15:43.756 10:12:15 -- nvmf/common.sh@297 -- # mlx=() 00:15:43.756 10:12:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:43.756 10:12:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.756 10:12:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.757 10:12:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.757 10:12:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:43.757 10:12:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:43.757 10:12:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:43.757 10:12:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:43.757 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:43.757 10:12:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:43.757 10:12:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:43.757 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:43.757 10:12:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:43.757 
10:12:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.757 10:12:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.757 10:12:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:43.757 Found net devices under 0000:af:00.0: cvl_0_0 00:15:43.757 10:12:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.757 10:12:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:43.757 10:12:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.757 10:12:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.757 10:12:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:43.757 Found net devices under 0000:af:00.1: cvl_0_1 00:15:43.757 10:12:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.757 10:12:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:43.757 10:12:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:43.757 10:12:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:43.757 10:12:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.757 10:12:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.757 10:12:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.757 10:12:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:43.757 10:12:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.757 10:12:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.757 10:12:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:43.757 10:12:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.757 10:12:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.757 10:12:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:43.757 10:12:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:43.757 10:12:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.757 10:12:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.757 10:12:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.757 10:12:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.757 10:12:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:43.757 10:12:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.757 10:12:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.757 10:12:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.757 10:12:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:43.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:15:43.757 00:15:43.757 --- 10.0.0.2 ping statistics --- 00:15:43.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.757 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:43.757 10:12:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:15:43.757 00:15:43.757 --- 10.0.0.1 ping statistics --- 00:15:43.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.757 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:15:43.757 10:12:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.757 10:12:16 -- nvmf/common.sh@410 -- # return 0 00:15:43.757 10:12:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:43.757 10:12:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.757 10:12:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:43.757 10:12:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:43.757 10:12:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.757 10:12:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:43.757 10:12:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:43.757 10:12:16 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:43.757 10:12:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:43.757 10:12:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:43.757 10:12:16 -- common/autotest_common.sh@10 -- # set +x 00:15:43.757 10:12:16 -- nvmf/common.sh@469 -- # nvmfpid=3382468 00:15:43.757 10:12:16 -- nvmf/common.sh@470 -- # waitforlisten 3382468 00:15:43.757 10:12:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.757 10:12:16 -- common/autotest_common.sh@819 -- # '[' -z 3382468 ']' 00:15:43.757 10:12:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.757 10:12:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:43.757 10:12:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.757 10:12:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:43.757 10:12:16 -- common/autotest_common.sh@10 -- # set +x 00:15:43.757 [2024-04-17 10:12:16.214514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:43.757 [2024-04-17 10:12:16.214555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.757 [2024-04-17 10:12:16.289114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.757 [2024-04-17 10:12:16.378707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:43.757 [2024-04-17 10:12:16.378854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.757 [2024-04-17 10:12:16.378866] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.757 [2024-04-17 10:12:16.378875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
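The 10.0.0.1/10.0.0.2 pair pinged above comes from nvmf_tcp_init: one port of the NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace for the target while the host keeps cvl_0_1 as the initiator side. The commands mirror the trace; the cvl_* interface names are specific to this CI node.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the default namespace, target side lives in the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic in, then sanity-check both directions as the log does.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # nvmf_tgt is then launched inside the same namespace, as in the nvmfappstart entry above.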
00:15:43.757 [2024-04-17 10:12:16.380498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.757 [2024-04-17 10:12:16.380531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.757 [2024-04-17 10:12:16.380588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.757 [2024-04-17 10:12:16.380589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.757 10:12:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.757 10:12:17 -- common/autotest_common.sh@852 -- # return 0 00:15:43.757 10:12:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.757 10:12:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.757 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:15:44.044 10:12:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.044 10:12:17 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:44.044 10:12:17 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26952 00:15:44.044 [2024-04-17 10:12:17.331924] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:44.044 10:12:17 -- target/invalid.sh@40 -- # out='request: 00:15:44.044 { 00:15:44.044 "nqn": "nqn.2016-06.io.spdk:cnode26952", 00:15:44.044 "tgt_name": "foobar", 00:15:44.044 "method": "nvmf_create_subsystem", 00:15:44.044 "req_id": 1 00:15:44.044 } 00:15:44.044 Got JSON-RPC error response 00:15:44.044 response: 00:15:44.044 { 00:15:44.044 "code": -32603, 00:15:44.044 "message": "Unable to find target foobar" 00:15:44.044 }' 00:15:44.044 10:12:17 -- target/invalid.sh@41 -- # [[ request: 00:15:44.044 { 00:15:44.044 "nqn": "nqn.2016-06.io.spdk:cnode26952", 00:15:44.044 "tgt_name": "foobar", 00:15:44.044 "method": "nvmf_create_subsystem", 00:15:44.044 "req_id": 1 00:15:44.044 } 00:15:44.044 Got JSON-RPC error response 00:15:44.044 response: 00:15:44.044 { 00:15:44.044 "code": -32603, 00:15:44.044 "message": "Unable to find target foobar" 00:15:44.044 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:44.044 10:12:17 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:44.044 10:12:17 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32428 00:15:44.311 [2024-04-17 10:12:17.580915] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32428: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:44.311 10:12:17 -- target/invalid.sh@45 -- # out='request: 00:15:44.311 { 00:15:44.311 "nqn": "nqn.2016-06.io.spdk:cnode32428", 00:15:44.311 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:44.311 "method": "nvmf_create_subsystem", 00:15:44.311 "req_id": 1 00:15:44.312 } 00:15:44.312 Got JSON-RPC error response 00:15:44.312 response: 00:15:44.312 { 00:15:44.312 "code": -32602, 00:15:44.312 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:44.312 }' 00:15:44.312 10:12:17 -- target/invalid.sh@46 -- # [[ request: 00:15:44.312 { 00:15:44.312 "nqn": "nqn.2016-06.io.spdk:cnode32428", 00:15:44.312 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:44.312 "method": "nvmf_create_subsystem", 00:15:44.312 "req_id": 1 00:15:44.312 } 00:15:44.312 Got JSON-RPC error response 00:15:44.312 response: 00:15:44.312 { 
00:15:44.312 "code": -32602, 00:15:44.312 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:44.312 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:44.312 10:12:17 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:44.312 10:12:17 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5722 00:15:44.569 [2024-04-17 10:12:17.833763] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5722: invalid model number 'SPDK_Controller' 00:15:44.569 10:12:17 -- target/invalid.sh@50 -- # out='request: 00:15:44.569 { 00:15:44.569 "nqn": "nqn.2016-06.io.spdk:cnode5722", 00:15:44.569 "model_number": "SPDK_Controller\u001f", 00:15:44.569 "method": "nvmf_create_subsystem", 00:15:44.569 "req_id": 1 00:15:44.569 } 00:15:44.569 Got JSON-RPC error response 00:15:44.569 response: 00:15:44.569 { 00:15:44.569 "code": -32602, 00:15:44.569 "message": "Invalid MN SPDK_Controller\u001f" 00:15:44.569 }' 00:15:44.569 10:12:17 -- target/invalid.sh@51 -- # [[ request: 00:15:44.569 { 00:15:44.569 "nqn": "nqn.2016-06.io.spdk:cnode5722", 00:15:44.569 "model_number": "SPDK_Controller\u001f", 00:15:44.569 "method": "nvmf_create_subsystem", 00:15:44.569 "req_id": 1 00:15:44.569 } 00:15:44.569 Got JSON-RPC error response 00:15:44.569 response: 00:15:44.569 { 00:15:44.569 "code": -32602, 00:15:44.569 "message": "Invalid MN SPDK_Controller\u001f" 00:15:44.569 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:44.569 10:12:17 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:44.569 10:12:17 -- target/invalid.sh@19 -- # local length=21 ll 00:15:44.569 10:12:17 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:44.569 10:12:17 -- target/invalid.sh@21 -- # local chars 00:15:44.569 10:12:17 -- target/invalid.sh@22 -- # local string 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # printf %x 77 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # string+=M 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # printf %x 46 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # string+=. 
00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # printf %x 72 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # string+=H 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # printf %x 71 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # string+=G 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # printf %x 50 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:44.569 10:12:17 -- target/invalid.sh@25 -- # string+=2 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.569 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 124 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+='|' 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 89 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=Y 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 109 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=m 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 113 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=q 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 42 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+='*' 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 99 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=c 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 103 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=g 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # printf %x 122 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:44.827 10:12:17 -- target/invalid.sh@25 -- # string+=z 
00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.827 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 37 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=% 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 90 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=Z 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 71 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=G 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 121 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=y 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 76 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=L 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 96 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+='`' 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 95 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=_ 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # printf %x 80 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:44.828 10:12:17 -- target/invalid.sh@25 -- # string+=P 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:44.828 10:12:17 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:44.828 10:12:17 -- target/invalid.sh@28 -- # [[ M == \- ]] 00:15:44.828 10:12:17 -- target/invalid.sh@31 -- # echo 'M.HG2|Ymq*cgz%ZGyL`_P' 00:15:44.828 10:12:17 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'M.HG2|Ymq*cgz%ZGyL`_P' nqn.2016-06.io.spdk:cnode2563 00:15:45.087 [2024-04-17 10:12:18.219155] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2563: invalid serial number 'M.HG2|Ymq*cgz%ZGyL`_P' 00:15:45.087 10:12:18 -- target/invalid.sh@54 -- # out='request: 00:15:45.087 { 00:15:45.087 "nqn": "nqn.2016-06.io.spdk:cnode2563", 00:15:45.087 "serial_number": "M.HG2|Ymq*cgz%ZGyL`_P", 00:15:45.087 "method": "nvmf_create_subsystem", 00:15:45.087 "req_id": 1 00:15:45.087 } 00:15:45.087 Got JSON-RPC error response 
00:15:45.087 response: 00:15:45.087 { 00:15:45.087 "code": -32602, 00:15:45.087 "message": "Invalid SN M.HG2|Ymq*cgz%ZGyL`_P" 00:15:45.087 }' 00:15:45.087 10:12:18 -- target/invalid.sh@55 -- # [[ request: 00:15:45.087 { 00:15:45.087 "nqn": "nqn.2016-06.io.spdk:cnode2563", 00:15:45.087 "serial_number": "M.HG2|Ymq*cgz%ZGyL`_P", 00:15:45.087 "method": "nvmf_create_subsystem", 00:15:45.087 "req_id": 1 00:15:45.087 } 00:15:45.087 Got JSON-RPC error response 00:15:45.087 response: 00:15:45.087 { 00:15:45.087 "code": -32602, 00:15:45.087 "message": "Invalid SN M.HG2|Ymq*cgz%ZGyL`_P" 00:15:45.087 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:45.087 10:12:18 -- target/invalid.sh@58 -- # gen_random_s 41 00:15:45.087 10:12:18 -- target/invalid.sh@19 -- # local length=41 ll 00:15:45.087 10:12:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:45.087 10:12:18 -- target/invalid.sh@21 -- # local chars 00:15:45.087 10:12:18 -- target/invalid.sh@22 -- # local string 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 90 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=Z 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 62 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='>' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 94 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='^' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 94 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='^' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 43 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=+ 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 101 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=e 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- 
target/invalid.sh@25 -- # printf %x 98 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=b 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 119 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=w 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 69 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=E 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 45 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=- 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 33 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='!' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 98 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=b 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 96 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='`' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 103 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=g 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 122 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=z 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 116 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=t 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 32 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=' ' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- 
target/invalid.sh@25 -- # printf %x 42 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='*' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 97 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=a 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 81 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=Q 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 39 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=\' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 32 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=' ' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 125 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='}' 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 50 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=2 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 95 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=_ 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 63 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+='?' 
00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # printf %x 99 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:45.087 10:12:18 -- target/invalid.sh@25 -- # string+=c 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.087 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 101 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=e 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 68 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=D 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 94 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+='^' 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 33 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+='!' 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 44 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=, 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 58 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=: 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 84 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=T 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 112 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=p 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 36 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+='$' 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 105 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=i 
00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 86 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=V 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 122 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=z 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 36 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+='$' 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # printf %x 102 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:45.346 10:12:18 -- target/invalid.sh@25 -- # string+=f 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:45.346 10:12:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:45.346 10:12:18 -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:15:45.346 10:12:18 -- target/invalid.sh@31 -- # echo 'Z>^^+ebwE-!b`gzt *aQ'\'' }2_?ceD^!,:Tp$iVz$f' 00:15:45.346 10:12:18 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Z>^^+ebwE-!b`gzt *aQ'\'' }2_?ceD^!,:Tp$iVz$f' nqn.2016-06.io.spdk:cnode21623 00:15:45.346 [2024-04-17 10:12:18.644701] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21623: invalid model number 'Z>^^+ebwE-!b`gzt *aQ' }2_?ceD^!,:Tp$iVz$f' 00:15:45.346 10:12:18 -- target/invalid.sh@58 -- # out='request: 00:15:45.346 { 00:15:45.346 "nqn": "nqn.2016-06.io.spdk:cnode21623", 00:15:45.346 "model_number": "Z>^^+ebwE-!b`gzt *aQ'\'' }2_?ceD^!,:Tp$iVz$f", 00:15:45.346 "method": "nvmf_create_subsystem", 00:15:45.346 "req_id": 1 00:15:45.346 } 00:15:45.347 Got JSON-RPC error response 00:15:45.347 response: 00:15:45.347 { 00:15:45.347 "code": -32602, 00:15:45.347 "message": "Invalid MN Z>^^+ebwE-!b`gzt *aQ'\'' }2_?ceD^!,:Tp$iVz$f" 00:15:45.347 }' 00:15:45.347 10:12:18 -- target/invalid.sh@59 -- # [[ request: 00:15:45.347 { 00:15:45.347 "nqn": "nqn.2016-06.io.spdk:cnode21623", 00:15:45.347 "model_number": "Z>^^+ebwE-!b`gzt *aQ' }2_?ceD^!,:Tp$iVz$f", 00:15:45.347 "method": "nvmf_create_subsystem", 00:15:45.347 "req_id": 1 00:15:45.347 } 00:15:45.347 Got JSON-RPC error response 00:15:45.347 response: 00:15:45.347 { 00:15:45.347 "code": -32602, 00:15:45.347 "message": "Invalid MN Z>^^+ebwE-!b`gzt *aQ' }2_?ceD^!,:Tp$iVz$f" 00:15:45.347 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:45.347 10:12:18 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:45.605 [2024-04-17 10:12:18.893714] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.605 10:12:18 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:45.863 10:12:19 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 
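[editor's note] The long printf/echo run above is target/invalid.sh's gen_random_s helper assembling first a 21- and then a 41-character string, one character at a time, from a table of ASCII codes 32-127; the resulting strings are then fed to nvmf_create_subsystem so the serial- and model-number length checks are exercised with arbitrary printable input. A compact sketch of the same technique (not the helper itself; this version stops at code 126 to stay printable):

    # Build an N-character string from printable ASCII, using the same
    # printf-%x / echo -e trick seen in the trace.
    gen_random_string() {
        local length=$1 string="" hex i
        for ((i = 0; i < length; i++)); do
            hex=$(printf '%x' $((RANDOM % 95 + 32)))   # random code point 32-126
            string+=$(echo -e "\\x$hex")               # append that character
        done
        printf '%s\n' "$string"
    }

    gen_random_string 21    # e.g. a 21-character serial-number candidate
    gen_random_string 41    # e.g. a 41-character model-number candidate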
00:15:45.863 10:12:19 -- target/invalid.sh@67 -- # echo '' 00:15:45.863 10:12:19 -- target/invalid.sh@67 -- # head -n 1 00:15:45.863 10:12:19 -- target/invalid.sh@67 -- # IP= 00:15:45.863 10:12:19 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:46.121 [2024-04-17 10:12:19.403677] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:46.121 10:12:19 -- target/invalid.sh@69 -- # out='request: 00:15:46.121 { 00:15:46.121 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:46.121 "listen_address": { 00:15:46.121 "trtype": "tcp", 00:15:46.121 "traddr": "", 00:15:46.121 "trsvcid": "4421" 00:15:46.121 }, 00:15:46.121 "method": "nvmf_subsystem_remove_listener", 00:15:46.121 "req_id": 1 00:15:46.121 } 00:15:46.121 Got JSON-RPC error response 00:15:46.121 response: 00:15:46.121 { 00:15:46.121 "code": -32602, 00:15:46.121 "message": "Invalid parameters" 00:15:46.121 }' 00:15:46.121 10:12:19 -- target/invalid.sh@70 -- # [[ request: 00:15:46.121 { 00:15:46.121 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:46.121 "listen_address": { 00:15:46.121 "trtype": "tcp", 00:15:46.121 "traddr": "", 00:15:46.121 "trsvcid": "4421" 00:15:46.121 }, 00:15:46.121 "method": "nvmf_subsystem_remove_listener", 00:15:46.121 "req_id": 1 00:15:46.121 } 00:15:46.121 Got JSON-RPC error response 00:15:46.121 response: 00:15:46.121 { 00:15:46.121 "code": -32602, 00:15:46.121 "message": "Invalid parameters" 00:15:46.121 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:46.121 10:12:19 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15762 -i 0 00:15:46.379 [2024-04-17 10:12:19.656591] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15762: invalid cntlid range [0-65519] 00:15:46.379 10:12:19 -- target/invalid.sh@73 -- # out='request: 00:15:46.379 { 00:15:46.379 "nqn": "nqn.2016-06.io.spdk:cnode15762", 00:15:46.379 "min_cntlid": 0, 00:15:46.379 "method": "nvmf_create_subsystem", 00:15:46.379 "req_id": 1 00:15:46.379 } 00:15:46.379 Got JSON-RPC error response 00:15:46.379 response: 00:15:46.379 { 00:15:46.379 "code": -32602, 00:15:46.379 "message": "Invalid cntlid range [0-65519]" 00:15:46.379 }' 00:15:46.379 10:12:19 -- target/invalid.sh@74 -- # [[ request: 00:15:46.379 { 00:15:46.380 "nqn": "nqn.2016-06.io.spdk:cnode15762", 00:15:46.380 "min_cntlid": 0, 00:15:46.380 "method": "nvmf_create_subsystem", 00:15:46.380 "req_id": 1 00:15:46.380 } 00:15:46.380 Got JSON-RPC error response 00:15:46.380 response: 00:15:46.380 { 00:15:46.380 "code": -32602, 00:15:46.380 "message": "Invalid cntlid range [0-65519]" 00:15:46.380 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:46.380 10:12:19 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2355 -i 65520 00:15:46.637 [2024-04-17 10:12:19.913485] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2355: invalid cntlid range [65520-65519] 00:15:46.637 10:12:19 -- target/invalid.sh@75 -- # out='request: 00:15:46.637 { 00:15:46.637 "nqn": "nqn.2016-06.io.spdk:cnode2355", 00:15:46.637 "min_cntlid": 65520, 00:15:46.637 "method": "nvmf_create_subsystem", 00:15:46.637 "req_id": 1 00:15:46.637 } 00:15:46.637 Got JSON-RPC error response 00:15:46.637 response: 00:15:46.637 { 
00:15:46.637 "code": -32602, 00:15:46.637 "message": "Invalid cntlid range [65520-65519]" 00:15:46.637 }' 00:15:46.637 10:12:19 -- target/invalid.sh@76 -- # [[ request: 00:15:46.637 { 00:15:46.637 "nqn": "nqn.2016-06.io.spdk:cnode2355", 00:15:46.637 "min_cntlid": 65520, 00:15:46.637 "method": "nvmf_create_subsystem", 00:15:46.637 "req_id": 1 00:15:46.637 } 00:15:46.637 Got JSON-RPC error response 00:15:46.637 response: 00:15:46.637 { 00:15:46.637 "code": -32602, 00:15:46.637 "message": "Invalid cntlid range [65520-65519]" 00:15:46.637 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:46.637 10:12:19 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17278 -I 0 00:15:46.895 [2024-04-17 10:12:20.162428] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17278: invalid cntlid range [1-0] 00:15:46.895 10:12:20 -- target/invalid.sh@77 -- # out='request: 00:15:46.895 { 00:15:46.895 "nqn": "nqn.2016-06.io.spdk:cnode17278", 00:15:46.895 "max_cntlid": 0, 00:15:46.895 "method": "nvmf_create_subsystem", 00:15:46.895 "req_id": 1 00:15:46.895 } 00:15:46.895 Got JSON-RPC error response 00:15:46.895 response: 00:15:46.895 { 00:15:46.895 "code": -32602, 00:15:46.895 "message": "Invalid cntlid range [1-0]" 00:15:46.895 }' 00:15:46.895 10:12:20 -- target/invalid.sh@78 -- # [[ request: 00:15:46.895 { 00:15:46.895 "nqn": "nqn.2016-06.io.spdk:cnode17278", 00:15:46.895 "max_cntlid": 0, 00:15:46.895 "method": "nvmf_create_subsystem", 00:15:46.895 "req_id": 1 00:15:46.895 } 00:15:46.895 Got JSON-RPC error response 00:15:46.895 response: 00:15:46.895 { 00:15:46.895 "code": -32602, 00:15:46.895 "message": "Invalid cntlid range [1-0]" 00:15:46.895 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:46.895 10:12:20 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27112 -I 65520 00:15:47.153 [2024-04-17 10:12:20.415383] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27112: invalid cntlid range [1-65520] 00:15:47.153 10:12:20 -- target/invalid.sh@79 -- # out='request: 00:15:47.153 { 00:15:47.153 "nqn": "nqn.2016-06.io.spdk:cnode27112", 00:15:47.153 "max_cntlid": 65520, 00:15:47.153 "method": "nvmf_create_subsystem", 00:15:47.153 "req_id": 1 00:15:47.153 } 00:15:47.153 Got JSON-RPC error response 00:15:47.153 response: 00:15:47.153 { 00:15:47.153 "code": -32602, 00:15:47.153 "message": "Invalid cntlid range [1-65520]" 00:15:47.153 }' 00:15:47.153 10:12:20 -- target/invalid.sh@80 -- # [[ request: 00:15:47.153 { 00:15:47.153 "nqn": "nqn.2016-06.io.spdk:cnode27112", 00:15:47.153 "max_cntlid": 65520, 00:15:47.153 "method": "nvmf_create_subsystem", 00:15:47.153 "req_id": 1 00:15:47.153 } 00:15:47.153 Got JSON-RPC error response 00:15:47.153 response: 00:15:47.153 { 00:15:47.153 "code": -32602, 00:15:47.153 "message": "Invalid cntlid range [1-65520]" 00:15:47.153 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:47.153 10:12:20 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17952 -i 6 -I 5 00:15:47.411 [2024-04-17 10:12:20.668316] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17952: invalid cntlid range [6-5] 00:15:47.411 10:12:20 -- target/invalid.sh@83 -- # out='request: 00:15:47.411 { 00:15:47.411 "nqn": 
"nqn.2016-06.io.spdk:cnode17952", 00:15:47.411 "min_cntlid": 6, 00:15:47.411 "max_cntlid": 5, 00:15:47.411 "method": "nvmf_create_subsystem", 00:15:47.411 "req_id": 1 00:15:47.411 } 00:15:47.411 Got JSON-RPC error response 00:15:47.411 response: 00:15:47.411 { 00:15:47.411 "code": -32602, 00:15:47.411 "message": "Invalid cntlid range [6-5]" 00:15:47.411 }' 00:15:47.411 10:12:20 -- target/invalid.sh@84 -- # [[ request: 00:15:47.411 { 00:15:47.411 "nqn": "nqn.2016-06.io.spdk:cnode17952", 00:15:47.411 "min_cntlid": 6, 00:15:47.411 "max_cntlid": 5, 00:15:47.411 "method": "nvmf_create_subsystem", 00:15:47.411 "req_id": 1 00:15:47.411 } 00:15:47.411 Got JSON-RPC error response 00:15:47.411 response: 00:15:47.411 { 00:15:47.411 "code": -32602, 00:15:47.411 "message": "Invalid cntlid range [6-5]" 00:15:47.411 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:47.412 10:12:20 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:47.670 10:12:20 -- target/invalid.sh@87 -- # out='request: 00:15:47.670 { 00:15:47.670 "name": "foobar", 00:15:47.670 "method": "nvmf_delete_target", 00:15:47.670 "req_id": 1 00:15:47.671 } 00:15:47.671 Got JSON-RPC error response 00:15:47.671 response: 00:15:47.671 { 00:15:47.671 "code": -32602, 00:15:47.671 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:47.671 }' 00:15:47.671 10:12:20 -- target/invalid.sh@88 -- # [[ request: 00:15:47.671 { 00:15:47.671 "name": "foobar", 00:15:47.671 "method": "nvmf_delete_target", 00:15:47.671 "req_id": 1 00:15:47.671 } 00:15:47.671 Got JSON-RPC error response 00:15:47.671 response: 00:15:47.671 { 00:15:47.671 "code": -32602, 00:15:47.671 "message": "The specified target doesn't exist, cannot delete it." 
00:15:47.671 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:47.671 10:12:20 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:47.671 10:12:20 -- target/invalid.sh@91 -- # nvmftestfini 00:15:47.671 10:12:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:47.671 10:12:20 -- nvmf/common.sh@116 -- # sync 00:15:47.671 10:12:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:47.671 10:12:20 -- nvmf/common.sh@119 -- # set +e 00:15:47.671 10:12:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:47.671 10:12:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:47.671 rmmod nvme_tcp 00:15:47.671 rmmod nvme_fabrics 00:15:47.671 rmmod nvme_keyring 00:15:47.671 10:12:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:47.671 10:12:20 -- nvmf/common.sh@123 -- # set -e 00:15:47.671 10:12:20 -- nvmf/common.sh@124 -- # return 0 00:15:47.671 10:12:20 -- nvmf/common.sh@477 -- # '[' -n 3382468 ']' 00:15:47.671 10:12:20 -- nvmf/common.sh@478 -- # killprocess 3382468 00:15:47.671 10:12:20 -- common/autotest_common.sh@926 -- # '[' -z 3382468 ']' 00:15:47.671 10:12:20 -- common/autotest_common.sh@930 -- # kill -0 3382468 00:15:47.671 10:12:20 -- common/autotest_common.sh@931 -- # uname 00:15:47.671 10:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.671 10:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3382468 00:15:47.671 10:12:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.671 10:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.671 10:12:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3382468' 00:15:47.671 killing process with pid 3382468 00:15:47.671 10:12:20 -- common/autotest_common.sh@945 -- # kill 3382468 00:15:47.671 10:12:20 -- common/autotest_common.sh@950 -- # wait 3382468 00:15:47.930 10:12:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.930 10:12:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.930 10:12:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.930 10:12:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.930 10:12:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.930 10:12:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.930 10:12:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.930 10:12:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.467 10:12:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:50.467 00:15:50.467 real 0m12.949s 00:15:50.467 user 0m23.585s 00:15:50.467 sys 0m5.353s 00:15:50.467 10:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.467 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.467 ************************************ 00:15:50.467 END TEST nvmf_invalid 00:15:50.467 ************************************ 00:15:50.467 10:12:23 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:50.467 10:12:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:50.467 10:12:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.467 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.467 ************************************ 00:15:50.467 START TEST nvmf_abort 00:15:50.467 ************************************ 00:15:50.467 10:12:23 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:15:50.467 * Looking for test storage... 00:15:50.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.467 10:12:23 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.467 10:12:23 -- nvmf/common.sh@7 -- # uname -s 00:15:50.467 10:12:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.467 10:12:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.467 10:12:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.467 10:12:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.467 10:12:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.467 10:12:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.467 10:12:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.467 10:12:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.467 10:12:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.467 10:12:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.467 10:12:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:50.467 10:12:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:50.467 10:12:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.467 10:12:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.467 10:12:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.467 10:12:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.467 10:12:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.467 10:12:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.467 10:12:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.467 10:12:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.467 10:12:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.467 10:12:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 10:12:23 -- paths/export.sh@5 -- # export PATH 00:15:50.468 10:12:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.468 10:12:23 -- nvmf/common.sh@46 -- # : 0 00:15:50.468 10:12:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.468 10:12:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.468 10:12:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.468 10:12:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.468 10:12:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.468 10:12:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:50.468 10:12:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.468 10:12:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.468 10:12:23 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.468 10:12:23 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:50.468 10:12:23 -- target/abort.sh@14 -- # nvmftestinit 00:15:50.468 10:12:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.468 10:12:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.468 10:12:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.468 10:12:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.468 10:12:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.468 10:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.468 10:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.468 10:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.468 10:12:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:50.468 10:12:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:50.468 10:12:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:50.468 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:15:55.860 10:12:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:55.860 10:12:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:55.860 10:12:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:55.860 10:12:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:55.860 10:12:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:55.860 10:12:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:55.860 10:12:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:55.860 10:12:28 -- nvmf/common.sh@294 -- # net_devs=() 00:15:55.860 10:12:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:55.860 10:12:28 -- nvmf/common.sh@295 -- 
# e810=() 00:15:55.860 10:12:28 -- nvmf/common.sh@295 -- # local -ga e810 00:15:55.860 10:12:28 -- nvmf/common.sh@296 -- # x722=() 00:15:55.860 10:12:28 -- nvmf/common.sh@296 -- # local -ga x722 00:15:55.860 10:12:28 -- nvmf/common.sh@297 -- # mlx=() 00:15:55.860 10:12:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:55.860 10:12:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.860 10:12:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:55.860 10:12:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:55.860 10:12:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:55.860 10:12:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.860 10:12:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:55.860 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:55.860 10:12:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.860 10:12:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:55.860 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:55.860 10:12:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.860 10:12:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:55.861 10:12:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.861 10:12:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.861 10:12:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:55.861 10:12:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.861 10:12:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:55.861 Found 
net devices under 0000:af:00.0: cvl_0_0 00:15:55.861 10:12:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.861 10:12:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.861 10:12:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.861 10:12:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:55.861 10:12:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.861 10:12:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:55.861 Found net devices under 0000:af:00.1: cvl_0_1 00:15:55.861 10:12:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.861 10:12:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:55.861 10:12:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:55.861 10:12:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:55.861 10:12:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:55.861 10:12:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.861 10:12:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.861 10:12:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.861 10:12:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:55.861 10:12:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.861 10:12:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.861 10:12:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:55.861 10:12:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.861 10:12:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.861 10:12:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:55.861 10:12:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:55.861 10:12:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.861 10:12:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.861 10:12:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.861 10:12:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.861 10:12:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:55.861 10:12:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.861 10:12:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.861 10:12:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.861 10:12:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:55.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:15:55.861 00:15:55.861 --- 10.0.0.2 ping statistics --- 00:15:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.861 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:15:55.861 10:12:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:15:55.861 00:15:55.861 --- 10.0.0.1 ping statistics --- 00:15:55.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.861 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:15:55.861 10:12:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.861 10:12:29 -- nvmf/common.sh@410 -- # return 0 00:15:55.861 10:12:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:55.861 10:12:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.861 10:12:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:55.861 10:12:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:55.861 10:12:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.861 10:12:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:55.861 10:12:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:55.861 10:12:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:55.861 10:12:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:55.861 10:12:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:55.861 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 10:12:29 -- nvmf/common.sh@469 -- # nvmfpid=3387188 00:15:55.861 10:12:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:55.861 10:12:29 -- nvmf/common.sh@470 -- # waitforlisten 3387188 00:15:55.861 10:12:29 -- common/autotest_common.sh@819 -- # '[' -z 3387188 ']' 00:15:55.861 10:12:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.861 10:12:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:55.861 10:12:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.861 10:12:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:55.861 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:15:55.861 [2024-04-17 10:12:29.170990] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:55.861 [2024-04-17 10:12:29.171047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.119 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.119 [2024-04-17 10:12:29.250565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:56.119 [2024-04-17 10:12:29.338706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.119 [2024-04-17 10:12:29.338849] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.119 [2024-04-17 10:12:29.338862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.119 [2024-04-17 10:12:29.338871] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
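[editor's note] Before this target instance was started, nvmf_tcp_init laid out the same two-sided test network used in the previous test: the target-side CVL port is moved into its own namespace and each side gets an address on 10.0.0.0/24, which the two pings above verify. Condensed from the trace (the cvl_0_0/cvl_0_1 names are the ports this particular node detected; root privileges assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator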
00:15:56.119 [2024-04-17 10:12:29.338979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.119 [2024-04-17 10:12:29.339089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.119 [2024-04-17 10:12:29.339090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.053 10:12:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.053 10:12:30 -- common/autotest_common.sh@852 -- # return 0 00:15:57.053 10:12:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:57.053 10:12:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 10:12:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.053 10:12:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 [2024-04-17 10:12:30.154304] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 Malloc0 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 Delay0 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 [2024-04-17 10:12:30.226916] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:57.053 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.053 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.053 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.053 10:12:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:57.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.054 [2024-04-17 10:12:30.347577] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:59.582 Initializing NVMe Controllers 00:15:59.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:59.582 controller IO queue size 128 less than required 00:15:59.582 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:59.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:59.582 Initialization complete. Launching workers. 00:15:59.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29091 00:15:59.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29156, failed to submit 62 00:15:59.582 success 29091, unsuccess 65, failed 0 00:15:59.582 10:12:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:59.582 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.582 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:15:59.582 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.582 10:12:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:59.582 10:12:32 -- target/abort.sh@38 -- # nvmftestfini 00:15:59.582 10:12:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:59.582 10:12:32 -- nvmf/common.sh@116 -- # sync 00:15:59.582 10:12:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:59.582 10:12:32 -- nvmf/common.sh@119 -- # set +e 00:15:59.582 10:12:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:59.582 10:12:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:59.582 rmmod nvme_tcp 00:15:59.582 rmmod nvme_fabrics 00:15:59.582 rmmod nvme_keyring 00:15:59.582 10:12:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:59.582 10:12:32 -- nvmf/common.sh@123 -- # set -e 00:15:59.582 10:12:32 -- nvmf/common.sh@124 -- # return 0 00:15:59.582 10:12:32 -- nvmf/common.sh@477 -- # '[' -n 3387188 ']' 00:15:59.582 10:12:32 -- nvmf/common.sh@478 -- # killprocess 3387188 00:15:59.582 10:12:32 -- common/autotest_common.sh@926 -- # '[' -z 3387188 ']' 00:15:59.582 10:12:32 -- common/autotest_common.sh@930 -- # kill -0 3387188 00:15:59.582 10:12:32 -- common/autotest_common.sh@931 -- # uname 00:15:59.582 10:12:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:59.582 10:12:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3387188 00:15:59.582 10:12:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:59.582 10:12:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:59.582 10:12:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3387188' 00:15:59.582 killing process with pid 3387188 00:15:59.582 10:12:32 -- common/autotest_common.sh@945 -- # kill 3387188 00:15:59.582 10:12:32 -- common/autotest_common.sh@950 -- # wait 3387188 00:15:59.582 10:12:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.582 10:12:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:59.582 10:12:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:59.582 10:12:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.582 10:12:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:59.582 
10:12:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.582 10:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.582 10:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.115 10:12:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:02.115 00:16:02.115 real 0m11.630s 00:16:02.115 user 0m13.880s 00:16:02.115 sys 0m5.230s 00:16:02.115 10:12:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.115 10:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:02.115 ************************************ 00:16:02.115 END TEST nvmf_abort 00:16:02.115 ************************************ 00:16:02.115 10:12:34 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:02.115 10:12:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:02.115 10:12:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.115 10:12:34 -- common/autotest_common.sh@10 -- # set +x 00:16:02.115 ************************************ 00:16:02.115 START TEST nvmf_ns_hotplug_stress 00:16:02.115 ************************************ 00:16:02.115 10:12:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:02.115 * Looking for test storage... 00:16:02.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.115 10:12:35 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.115 10:12:35 -- nvmf/common.sh@7 -- # uname -s 00:16:02.115 10:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.115 10:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.115 10:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.115 10:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.115 10:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.115 10:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.115 10:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.115 10:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.115 10:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.115 10:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.115 10:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:02.115 10:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:02.115 10:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.115 10:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.115 10:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.115 10:12:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.115 10:12:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.115 10:12:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.115 10:12:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.115 10:12:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.116 10:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.116 10:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.116 10:12:35 -- paths/export.sh@5 -- # export PATH 00:16:02.116 10:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.116 10:12:35 -- nvmf/common.sh@46 -- # : 0 00:16:02.116 10:12:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:02.116 10:12:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:02.116 10:12:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:02.116 10:12:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.116 10:12:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.116 10:12:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:02.116 10:12:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:02.116 10:12:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:02.116 10:12:35 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.116 10:12:35 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:16:02.116 10:12:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:02.116 10:12:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.116 10:12:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:02.116 10:12:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:02.116 10:12:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:02.116 10:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:02.116 10:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.116 10:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.116 10:12:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:02.116 10:12:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:02.116 10:12:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:02.116 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:16:07.380 10:12:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:07.380 10:12:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:07.380 10:12:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:07.380 10:12:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:07.380 10:12:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:07.380 10:12:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:07.380 10:12:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:07.380 10:12:40 -- nvmf/common.sh@294 -- # net_devs=() 00:16:07.380 10:12:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:07.380 10:12:40 -- nvmf/common.sh@295 -- # e810=() 00:16:07.380 10:12:40 -- nvmf/common.sh@295 -- # local -ga e810 00:16:07.380 10:12:40 -- nvmf/common.sh@296 -- # x722=() 00:16:07.380 10:12:40 -- nvmf/common.sh@296 -- # local -ga x722 00:16:07.380 10:12:40 -- nvmf/common.sh@297 -- # mlx=() 00:16:07.380 10:12:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:07.380 10:12:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.380 10:12:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.381 10:12:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.381 10:12:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:07.381 10:12:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:07.381 10:12:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:07.381 10:12:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:07.381 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:07.381 10:12:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:07.381 10:12:40 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:07.381 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:07.381 10:12:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:07.381 10:12:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.381 10:12:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.381 10:12:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:07.381 Found net devices under 0000:af:00.0: cvl_0_0 00:16:07.381 10:12:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.381 10:12:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:07.381 10:12:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.381 10:12:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.381 10:12:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:07.381 Found net devices under 0000:af:00.1: cvl_0_1 00:16:07.381 10:12:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.381 10:12:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:07.381 10:12:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:07.381 10:12:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:07.381 10:12:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.381 10:12:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.381 10:12:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.381 10:12:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:07.381 10:12:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.381 10:12:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.381 10:12:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:07.381 10:12:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.381 10:12:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.381 10:12:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:07.381 10:12:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:07.381 10:12:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.381 10:12:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.381 10:12:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.381 10:12:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.381 10:12:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:07.381 10:12:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
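For reference, the nvmf_tcp_init commands traced above amount to the following sketch, with the target-side port isolated in its own network namespace. Interface names and addresses are the ones used on this testbed; this is a condensed restatement of the trace, not the common.sh implementation itself.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # the NVMe/TCP target will live in this namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The loopback/iptables/ping checks that follow in the trace then confirm both sides can reach each other before the target is started.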
00:16:07.639 10:12:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.639 10:12:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.639 10:12:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:07.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:16:07.639 00:16:07.639 --- 10.0.0.2 ping statistics --- 00:16:07.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.639 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:07.639 10:12:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:16:07.639 00:16:07.639 --- 10.0.0.1 ping statistics --- 00:16:07.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.639 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:07.639 10:12:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.639 10:12:40 -- nvmf/common.sh@410 -- # return 0 00:16:07.639 10:12:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:07.639 10:12:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.639 10:12:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:07.639 10:12:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:07.639 10:12:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.639 10:12:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:07.639 10:12:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:07.639 10:12:40 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:16:07.639 10:12:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:07.639 10:12:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:07.639 10:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.639 10:12:40 -- nvmf/common.sh@469 -- # nvmfpid=3391484 00:16:07.639 10:12:40 -- nvmf/common.sh@470 -- # waitforlisten 3391484 00:16:07.639 10:12:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:07.639 10:12:40 -- common/autotest_common.sh@819 -- # '[' -z 3391484 ']' 00:16:07.639 10:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.639 10:12:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.639 10:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.639 10:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.639 10:12:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.639 [2024-04-17 10:12:40.890633] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
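The nvmfappstart step above launches the target application inside the new namespace and blocks until its RPC socket is ready. A rough shell equivalent of what the trace shows; the socket-polling loop is an illustrative stand-in for waitforlisten, not its actual implementation.

modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                                          # pid 3391484 in this run
# Wait for the app to come up and listen on its UNIX-domain RPC socket.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done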
00:16:07.639 [2024-04-17 10:12:40.890687] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.639 [2024-04-17 10:12:40.955370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.896 [2024-04-17 10:12:41.042880] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:07.896 [2024-04-17 10:12:41.043027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.896 [2024-04-17 10:12:41.043039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.896 [2024-04-17 10:12:41.043048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.896 [2024-04-17 10:12:41.043095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.896 [2024-04-17 10:12:41.043208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.896 [2024-04-17 10:12:41.043209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.828 10:12:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.828 10:12:41 -- common/autotest_common.sh@852 -- # return 0 00:16:08.828 10:12:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:08.828 10:12:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:08.828 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.828 10:12:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.828 10:12:41 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:16:08.828 10:12:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:08.828 [2024-04-17 10:12:42.086673] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.828 10:12:42 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.086 10:12:42 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.344 [2024-04-17 10:12:42.581712] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.344 10:12:42 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:09.602 10:12:42 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:09.860 Malloc0 00:16:09.860 10:12:43 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:10.118 Delay0 00:16:10.118 10:12:43 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.376 10:12:43 -- target/ns_hotplug_stress.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:10.634 NULL1 00:16:10.634 10:12:43 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:10.893 10:12:44 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3392052 00:16:10.893 10:12:44 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:10.893 10:12:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:10.893 10:12:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.893 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.151 10:12:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.408 10:12:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:16:11.408 10:12:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:11.666 true 00:16:11.666 10:12:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:11.666 10:12:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.924 10:12:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.182 10:12:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:16:12.182 10:12:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:12.440 true 00:16:12.440 10:12:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:12.440 10:12:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.374 Read completed with error (sct=0, sc=11) 00:16:13.374 10:12:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.374 10:12:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:16:13.374 10:12:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:13.632 true 00:16:13.632 10:12:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:13.632 10:12:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.889 10:12:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.147 10:12:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:16:14.147 10:12:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:14.405 true 00:16:14.405 10:12:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 
3392052 00:16:14.405 10:12:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:15.340 10:12:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:15.598 10:12:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:16:15.598 10:12:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:15.857 true 00:16:15.857 10:12:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:15.857 10:12:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.115 10:12:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.373 10:12:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:16:16.373 10:12:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:16.631 true 00:16:16.631 10:12:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:16.631 10:12:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.564 10:12:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.822 10:12:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:16:17.822 10:12:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:17.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.080 true 00:16:18.080 10:12:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:18.080 10:12:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.338 10:12:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.596 10:12:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:16:18.596 10:12:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:18.596 true 00:16:18.596 10:12:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:18.596 10:12:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.969 10:12:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.969 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.969 10:12:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:16:19.969 10:12:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:19.969 true 00:16:19.969 10:12:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:19.969 10:12:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.902 10:12:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.160 10:12:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:16:21.160 10:12:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:21.418 true 00:16:21.418 10:12:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:21.418 10:12:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:21.677 10:12:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:21.935 10:12:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:16:21.935 10:12:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:21.935 true 00:16:22.193 10:12:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:22.193 10:12:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.128 10:12:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.128 10:12:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:16:23.128 10:12:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:23.386 true 00:16:23.386 10:12:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:23.386 10:12:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.645 10:12:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:23.903 10:12:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:16:23.903 10:12:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:24.161 true 00:16:24.161 10:12:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:24.161 10:12:57 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.095 10:12:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.353 10:12:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:16:25.353 10:12:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:25.611 true 00:16:25.611 10:12:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:25.611 10:12:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.868 10:12:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.126 10:12:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:16:26.126 10:12:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:26.384 true 00:16:26.384 10:12:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:26.384 10:12:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.319 10:13:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.319 10:13:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:16:27.319 10:13:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:27.577 true 00:16:27.577 10:13:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:27.577 10:13:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:27.835 10:13:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.093 10:13:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:16:28.093 10:13:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:28.352 true 00:16:28.352 10:13:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:28.352 10:13:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.286 10:13:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.544 10:13:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:16:29.544 10:13:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:29.802 true 00:16:29.802 10:13:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:29.802 10:13:02 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.060 10:13:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.343 10:13:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:16:30.343 10:13:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:30.343 true 00:16:30.343 10:13:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:30.343 10:13:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.300 10:13:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.558 10:13:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:16:31.558 10:13:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:31.817 true 00:16:31.817 10:13:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:31.817 10:13:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.075 10:13:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.333 10:13:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:16:32.333 10:13:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:32.591 true 00:16:32.591 10:13:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:32.591 10:13:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.525 10:13:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:33.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.783 10:13:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:16:33.783 10:13:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:34.040 true 00:16:34.040 10:13:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:34.040 10:13:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.298 10:13:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.556 10:13:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:16:34.556 10:13:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:34.815 true 
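The repeating remove_ns / add_ns / bdev_null_resize pattern above is the body of the ns_hotplug_stress loop (script lines 35-41 in the trace). A rough sketch of that loop follows; the variable names and loop framing are illustrative, only the rpc.py calls and script line numbers are taken from the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do           # line 35: run while spdk_nvme_perf is alive
    "$rpc" nvmf_subsystem_remove_ns "$subsys" 1     # line 36: hot-remove namespace 1 under load
    "$rpc" nvmf_subsystem_add_ns "$subsys" Delay0   # line 37: hot-add the Delay0 bdev back
    null_size=$((null_size + 1))                    # line 40
    "$rpc" bdev_null_resize NULL1 "$null_size"      # line 41: resize NULL1 while it serves I/O
done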
00:16:34.815 10:13:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:34.815 10:13:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.749 10:13:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.006 10:13:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:16:36.006 10:13:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:36.006 true 00:16:36.006 10:13:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:36.006 10:13:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.263 10:13:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.521 10:13:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:16:36.521 10:13:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:36.778 true 00:16:36.778 10:13:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:36.778 10:13:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.712 10:13:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.969 10:13:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:16:37.970 10:13:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:38.227 true 00:16:38.227 10:13:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:38.227 10:13:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.485 10:13:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:38.743 10:13:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:16:38.743 10:13:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:39.000 true 00:16:39.000 10:13:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:39.000 10:13:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.937 10:13:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.195 10:13:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:16:40.195 10:13:13 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:40.453 true 00:16:40.453 10:13:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:40.453 10:13:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.711 10:13:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.970 10:13:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:16:40.970 10:13:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:40.970 true 00:16:41.228 10:13:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:41.228 10:13:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.165 Initializing NVMe Controllers 00:16:42.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.165 Controller IO queue size 128, less than required. 00:16:42.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.165 Controller IO queue size 128, less than required. 00:16:42.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:42.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:42.165 Initialization complete. Launching workers. 
00:16:42.165 ======================================================== 00:16:42.165 Latency(us) 00:16:42.165 Device Information : IOPS MiB/s Average min max 00:16:42.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 646.46 0.32 111466.77 2917.87 1081831.05 00:16:42.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15305.02 7.47 8331.98 1788.84 553363.83 00:16:42.165 ======================================================== 00:16:42.165 Total : 15951.48 7.79 12511.71 1788.84 1081831.05 00:16:42.165 00:16:42.165 10:13:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.165 10:13:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:16:42.165 10:13:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:42.424 true 00:16:42.424 10:13:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3392052 00:16:42.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3392052) - No such process 00:16:42.424 10:13:15 -- target/ns_hotplug_stress.sh@44 -- # wait 3392052 00:16:42.424 10:13:15 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:42.424 10:13:15 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:16:42.424 10:13:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.424 10:13:15 -- nvmf/common.sh@116 -- # sync 00:16:42.424 10:13:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.424 10:13:15 -- nvmf/common.sh@119 -- # set +e 00:16:42.424 10:13:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.424 10:13:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.424 rmmod nvme_tcp 00:16:42.424 rmmod nvme_fabrics 00:16:42.424 rmmod nvme_keyring 00:16:42.424 10:13:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.424 10:13:15 -- nvmf/common.sh@123 -- # set -e 00:16:42.424 10:13:15 -- nvmf/common.sh@124 -- # return 0 00:16:42.424 10:13:15 -- nvmf/common.sh@477 -- # '[' -n 3391484 ']' 00:16:42.424 10:13:15 -- nvmf/common.sh@478 -- # killprocess 3391484 00:16:42.424 10:13:15 -- common/autotest_common.sh@926 -- # '[' -z 3391484 ']' 00:16:42.424 10:13:15 -- common/autotest_common.sh@930 -- # kill -0 3391484 00:16:42.424 10:13:15 -- common/autotest_common.sh@931 -- # uname 00:16:42.424 10:13:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:42.424 10:13:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391484 00:16:42.424 10:13:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:42.424 10:13:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:42.424 10:13:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391484' 00:16:42.424 killing process with pid 3391484 00:16:42.424 10:13:15 -- common/autotest_common.sh@945 -- # kill 3391484 00:16:42.424 10:13:15 -- common/autotest_common.sh@950 -- # wait 3391484 00:16:42.682 10:13:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.682 10:13:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.682 10:13:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.682 10:13:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.682 10:13:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.682 10:13:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
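Once spdk_nvme_perf exits (the failed kill -0 above), nvmftestfini tears everything back down. A condensed sketch of the cleanup traced here; the ip netns deletion is an assumption about what _remove_spdk_ns does, since its output is redirected to /dev/null in the log.

modprobe -v -r nvme-tcp                  # unload host-side NVMe/TCP modules
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"       # stop the nvmf_tgt started earlier (pid 3391484 in this run)
ip netns delete cvl_0_0_ns_spdk          # assumed namespace cleanup performed by _remove_spdk_ns
ip -4 addr flush cvl_0_1                 # drop the initiator-side address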
00:16:42.682 10:13:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.682 10:13:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.215 10:13:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:45.215 00:16:45.215 real 0m43.107s 00:16:45.215 user 2m36.152s 00:16:45.215 sys 0m10.264s 00:16:45.215 10:13:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.215 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:16:45.215 ************************************ 00:16:45.215 END TEST nvmf_ns_hotplug_stress 00:16:45.215 ************************************ 00:16:45.215 10:13:18 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:45.215 10:13:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:45.215 10:13:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.215 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:16:45.215 ************************************ 00:16:45.215 START TEST nvmf_connect_stress 00:16:45.215 ************************************ 00:16:45.215 10:13:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:45.215 * Looking for test storage... 00:16:45.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.215 10:13:18 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.215 10:13:18 -- nvmf/common.sh@7 -- # uname -s 00:16:45.215 10:13:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.215 10:13:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.215 10:13:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.215 10:13:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.215 10:13:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.215 10:13:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.215 10:13:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.215 10:13:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.215 10:13:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.215 10:13:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.215 10:13:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:45.215 10:13:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:45.215 10:13:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.215 10:13:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.215 10:13:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.215 10:13:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.215 10:13:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.215 10:13:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.215 10:13:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.215 10:13:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.215 10:13:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.215 10:13:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.215 10:13:18 -- paths/export.sh@5 -- # export PATH 00:16:45.216 10:13:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.216 10:13:18 -- nvmf/common.sh@46 -- # : 0 00:16:45.216 10:13:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.216 10:13:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.216 10:13:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.216 10:13:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.216 10:13:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.216 10:13:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.216 10:13:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.216 10:13:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.216 10:13:18 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:45.216 10:13:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.216 10:13:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.216 10:13:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.216 10:13:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.216 10:13:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:45.216 10:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.216 10:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.216 10:13:18 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.216 10:13:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:45.216 10:13:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:45.216 10:13:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:45.216 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:16:50.486 10:13:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:50.486 10:13:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:50.486 10:13:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:50.486 10:13:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:50.486 10:13:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:50.486 10:13:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:50.486 10:13:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:50.486 10:13:23 -- nvmf/common.sh@294 -- # net_devs=() 00:16:50.486 10:13:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:50.486 10:13:23 -- nvmf/common.sh@295 -- # e810=() 00:16:50.486 10:13:23 -- nvmf/common.sh@295 -- # local -ga e810 00:16:50.486 10:13:23 -- nvmf/common.sh@296 -- # x722=() 00:16:50.486 10:13:23 -- nvmf/common.sh@296 -- # local -ga x722 00:16:50.486 10:13:23 -- nvmf/common.sh@297 -- # mlx=() 00:16:50.486 10:13:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:50.486 10:13:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.486 10:13:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:50.486 10:13:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:50.486 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:50.486 10:13:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:50.486 10:13:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:50.486 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:50.486 
10:13:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:50.486 10:13:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.486 10:13:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.486 10:13:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:50.486 Found net devices under 0000:af:00.0: cvl_0_0 00:16:50.486 10:13:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:50.486 10:13:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.486 10:13:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.486 10:13:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:50.486 Found net devices under 0000:af:00.1: cvl_0_1 00:16:50.486 10:13:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:50.486 10:13:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:50.486 10:13:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:50.486 10:13:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.486 10:13:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.486 10:13:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:50.486 10:13:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.486 10:13:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.486 10:13:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:50.486 10:13:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.486 10:13:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.486 10:13:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:50.486 10:13:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:50.486 10:13:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.486 10:13:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.486 10:13:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.486 10:13:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.486 10:13:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:50.486 10:13:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.486 10:13:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.486 10:13:23 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.486 10:13:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:50.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:16:50.486 00:16:50.486 --- 10.0.0.2 ping statistics --- 00:16:50.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.486 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:50.486 10:13:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:50.486 00:16:50.486 --- 10.0.0.1 ping statistics --- 00:16:50.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.486 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:50.486 10:13:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.486 10:13:23 -- nvmf/common.sh@410 -- # return 0 00:16:50.486 10:13:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:50.486 10:13:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.487 10:13:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:50.487 10:13:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:50.487 10:13:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.487 10:13:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:50.487 10:13:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:50.487 10:13:23 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:50.487 10:13:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.487 10:13:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:50.487 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.487 10:13:23 -- nvmf/common.sh@469 -- # nvmfpid=3401502 00:16:50.487 10:13:23 -- nvmf/common.sh@470 -- # waitforlisten 3401502 00:16:50.487 10:13:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:50.487 10:13:23 -- common/autotest_common.sh@819 -- # '[' -z 3401502 ']' 00:16:50.487 10:13:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.487 10:13:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.487 10:13:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.487 10:13:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.487 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:16:50.746 [2024-04-17 10:13:23.825992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
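Note: the namespace plumbing and the two ping checks logged above (nvmf_tcp_init in nvmf/common.sh) boil down to the short sequence below. This is a condensed restatement for reference, not part of the harness output; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this run happened to use.
  # move the target-side port into a private network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side keeps its address in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify both directions of the data path
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1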
00:16:50.746 [2024-04-17 10:13:23.826045] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.746 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.746 [2024-04-17 10:13:23.904687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:50.746 [2024-04-17 10:13:23.992930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.746 [2024-04-17 10:13:23.993074] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.746 [2024-04-17 10:13:23.993086] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.746 [2024-04-17 10:13:23.993095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.746 [2024-04-17 10:13:23.993215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.746 [2024-04-17 10:13:23.993329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:50.746 [2024-04-17 10:13:23.993329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.682 10:13:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.682 10:13:24 -- common/autotest_common.sh@852 -- # return 0 00:16:51.682 10:13:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.682 10:13:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:51.682 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 10:13:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.682 10:13:24 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.682 10:13:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.682 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 [2024-04-17 10:13:24.804660] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.682 10:13:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.682 10:13:24 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:51.682 10:13:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.682 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 10:13:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.682 10:13:24 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.682 10:13:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.682 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 [2024-04-17 10:13:24.842787] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.682 10:13:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.682 10:13:24 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:51.682 10:13:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.682 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.682 NULL1 00:16:51.682 10:13:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.682 10:13:24 -- target/connect_stress.sh@21 -- # PERF_PID=3401724 00:16:51.682 10:13:24 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:51.682 10:13:24 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:51.682 10:13:24 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.682 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.682 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.683 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.683 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.683 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.683 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:51.683 10:13:24 -- target/connect_stress.sh@28 -- # cat 00:16:51.683 10:13:24 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:51.683 10:13:24 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:16:51.683 10:13:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.683 10:13:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.940 10:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.940 10:13:25 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:51.940 10:13:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:51.940 10:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.940 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 10:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.506 10:13:25 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:52.506 10:13:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.506 10:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.506 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.764 10:13:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.764 10:13:25 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:52.764 10:13:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:52.764 10:13:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.764 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:16:53.022 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.022 10:13:26 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:53.022 10:13:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.022 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.022 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.280 10:13:26 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:53.280 10:13:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.280 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.280 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.847 10:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.847 10:13:26 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:53.847 10:13:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:53.847 10:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.847 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:16:54.105 10:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.105 10:13:27 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:54.105 10:13:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.105 10:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.105 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.363 10:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.363 10:13:27 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:54.363 10:13:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.363 10:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.363 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 10:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.622 10:13:27 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:54.622 10:13:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:54.622 10:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.622 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.880 10:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.880 10:13:28 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:54.880 10:13:28 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:16:54.880 10:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.880 10:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.446 10:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.446 10:13:28 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:55.446 10:13:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.446 10:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.446 10:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.704 10:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.704 10:13:28 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:55.704 10:13:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.704 10:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.705 10:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.963 10:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.963 10:13:29 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:55.963 10:13:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:55.963 10:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.963 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:56.222 10:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.222 10:13:29 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:56.222 10:13:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.222 10:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.222 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:56.480 10:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.480 10:13:29 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:56.480 10:13:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:56.480 10:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.480 10:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:57.046 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.046 10:13:30 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:57.046 10:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.046 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.046 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.304 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.304 10:13:30 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:57.304 10:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.304 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.304 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.562 10:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.562 10:13:30 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:57.562 10:13:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.562 10:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.562 10:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.820 10:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.820 10:13:31 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:57.820 10:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:57.820 10:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.820 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.078 10:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.078 10:13:31 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:58.078 10:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.078 
10:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.079 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.645 10:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.645 10:13:31 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:58.645 10:13:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.645 10:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.645 10:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 10:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:58.903 10:13:32 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:58.903 10:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.903 10:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:58.903 10:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.161 10:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.161 10:13:32 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:59.161 10:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.161 10:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.161 10:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.419 10:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.419 10:13:32 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:59.419 10:13:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.419 10:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.419 10:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:59.986 10:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.986 10:13:33 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:16:59.986 10:13:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.986 10:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.986 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.244 10:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.244 10:13:33 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:00.244 10:13:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.244 10:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.244 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.502 10:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.502 10:13:33 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:00.502 10:13:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.502 10:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.502 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.760 10:13:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.760 10:13:34 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:00.760 10:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.760 10:13:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.760 10:13:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.019 10:13:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.019 10:13:34 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:01.019 10:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.019 10:13:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.019 10:13:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.585 10:13:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.585 10:13:34 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:01.585 10:13:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.585 10:13:34 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.585 10:13:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.844 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:01.844 10:13:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.844 10:13:34 -- target/connect_stress.sh@34 -- # kill -0 3401724 00:17:01.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3401724) - No such process 00:17:01.844 10:13:34 -- target/connect_stress.sh@38 -- # wait 3401724 00:17:01.844 10:13:34 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:01.844 10:13:34 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:01.844 10:13:34 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:01.844 10:13:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:01.844 10:13:34 -- nvmf/common.sh@116 -- # sync 00:17:01.844 10:13:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:01.844 10:13:34 -- nvmf/common.sh@119 -- # set +e 00:17:01.844 10:13:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:01.844 10:13:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:01.844 rmmod nvme_tcp 00:17:01.844 rmmod nvme_fabrics 00:17:01.844 rmmod nvme_keyring 00:17:01.844 10:13:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:01.844 10:13:35 -- nvmf/common.sh@123 -- # set -e 00:17:01.844 10:13:35 -- nvmf/common.sh@124 -- # return 0 00:17:01.844 10:13:35 -- nvmf/common.sh@477 -- # '[' -n 3401502 ']' 00:17:01.844 10:13:35 -- nvmf/common.sh@478 -- # killprocess 3401502 00:17:01.844 10:13:35 -- common/autotest_common.sh@926 -- # '[' -z 3401502 ']' 00:17:01.844 10:13:35 -- common/autotest_common.sh@930 -- # kill -0 3401502 00:17:01.844 10:13:35 -- common/autotest_common.sh@931 -- # uname 00:17:01.844 10:13:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.844 10:13:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3401502 00:17:01.844 10:13:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:01.844 10:13:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:01.844 10:13:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3401502' 00:17:01.844 killing process with pid 3401502 00:17:01.844 10:13:35 -- common/autotest_common.sh@945 -- # kill 3401502 00:17:01.844 10:13:35 -- common/autotest_common.sh@950 -- # wait 3401502 00:17:02.103 10:13:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:02.103 10:13:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:02.103 10:13:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:02.103 10:13:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.103 10:13:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:02.103 10:13:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.103 10:13:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.103 10:13:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.093 10:13:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:04.093 00:17:04.093 real 0m19.288s 00:17:04.093 user 0m41.584s 00:17:04.093 sys 0m7.931s 00:17:04.093 10:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.093 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.093 ************************************ 00:17:04.093 END TEST nvmf_connect_stress 00:17:04.093 
************************************ 00:17:04.093 10:13:37 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:04.093 10:13:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:04.093 10:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:04.093 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:17:04.093 ************************************ 00:17:04.093 START TEST nvmf_fused_ordering 00:17:04.093 ************************************ 00:17:04.093 10:13:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:04.352 * Looking for test storage... 00:17:04.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.352 10:13:37 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.352 10:13:37 -- nvmf/common.sh@7 -- # uname -s 00:17:04.352 10:13:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.352 10:13:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.352 10:13:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.352 10:13:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.352 10:13:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.352 10:13:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.352 10:13:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.352 10:13:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.352 10:13:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.352 10:13:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.352 10:13:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:04.352 10:13:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:04.352 10:13:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.352 10:13:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.352 10:13:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.352 10:13:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.352 10:13:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.352 10:13:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.352 10:13:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.353 10:13:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.353 10:13:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.353 10:13:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.353 10:13:37 -- paths/export.sh@5 -- # export PATH 00:17:04.353 10:13:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.353 10:13:37 -- nvmf/common.sh@46 -- # : 0 00:17:04.353 10:13:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:04.353 10:13:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:04.353 10:13:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:04.353 10:13:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.353 10:13:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.353 10:13:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:04.353 10:13:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:04.353 10:13:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:04.353 10:13:37 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:04.353 10:13:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:04.353 10:13:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.353 10:13:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:04.353 10:13:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:04.353 10:13:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:04.353 10:13:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.353 10:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.353 10:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.353 10:13:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:04.353 10:13:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:04.353 10:13:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:04.353 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:17:10.920 10:13:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.920 10:13:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:10.920 10:13:42 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:17:10.920 10:13:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:10.920 10:13:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:10.920 10:13:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:10.920 10:13:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:10.920 10:13:42 -- nvmf/common.sh@294 -- # net_devs=() 00:17:10.920 10:13:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:10.920 10:13:42 -- nvmf/common.sh@295 -- # e810=() 00:17:10.920 10:13:42 -- nvmf/common.sh@295 -- # local -ga e810 00:17:10.920 10:13:42 -- nvmf/common.sh@296 -- # x722=() 00:17:10.920 10:13:42 -- nvmf/common.sh@296 -- # local -ga x722 00:17:10.920 10:13:42 -- nvmf/common.sh@297 -- # mlx=() 00:17:10.920 10:13:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:10.920 10:13:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.920 10:13:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:10.920 10:13:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:10.920 10:13:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.920 10:13:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:10.920 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:10.920 10:13:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:10.920 10:13:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:10.920 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:10.920 10:13:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
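Note: the arrays built above classify the host's NICs purely by PCI vendor:device ID (E810 0x1592/0x159b, X722 0x37d2, the Mellanox list), and the entries that follow resolve each selected PCI function to its kernel net device by globbing sysfs. A standalone sketch of that lookup, reusing one of the ports this run found (0000:af:00.0), is roughly:
  pci=0000:af:00.0                                   # address taken from this run's log
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"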
00:17:10.920 10:13:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.920 10:13:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.920 10:13:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.920 10:13:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:10.920 Found net devices under 0000:af:00.0: cvl_0_0 00:17:10.920 10:13:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.920 10:13:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:10.920 10:13:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.920 10:13:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.920 10:13:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:10.920 Found net devices under 0000:af:00.1: cvl_0_1 00:17:10.920 10:13:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.920 10:13:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:10.920 10:13:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:10.920 10:13:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:10.920 10:13:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.920 10:13:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.920 10:13:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.920 10:13:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:10.920 10:13:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.920 10:13:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.920 10:13:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:10.920 10:13:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.920 10:13:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.920 10:13:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:10.920 10:13:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:10.920 10:13:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.920 10:13:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.920 10:13:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.920 10:13:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.920 10:13:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:10.920 10:13:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.920 10:13:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.920 10:13:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.920 10:13:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:10.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:10.920 00:17:10.920 --- 10.0.0.2 ping statistics --- 00:17:10.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.921 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:10.921 10:13:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:17:10.921 00:17:10.921 --- 10.0.0.1 ping statistics --- 00:17:10.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.921 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:17:10.921 10:13:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.921 10:13:43 -- nvmf/common.sh@410 -- # return 0 00:17:10.921 10:13:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:10.921 10:13:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.921 10:13:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:10.921 10:13:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:10.921 10:13:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.921 10:13:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:10.921 10:13:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:10.921 10:13:43 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:10.921 10:13:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.921 10:13:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:10.921 10:13:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 10:13:43 -- nvmf/common.sh@469 -- # nvmfpid=3407338 00:17:10.921 10:13:43 -- nvmf/common.sh@470 -- # waitforlisten 3407338 00:17:10.921 10:13:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.921 10:13:43 -- common/autotest_common.sh@819 -- # '[' -z 3407338 ']' 00:17:10.921 10:13:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.921 10:13:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.921 10:13:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.921 10:13:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.921 10:13:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 [2024-04-17 10:13:43.348672] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:10.921 [2024-04-17 10:13:43.348726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.921 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.921 [2024-04-17 10:13:43.426584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.921 [2024-04-17 10:13:43.513059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.921 [2024-04-17 10:13:43.513200] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:10.921 [2024-04-17 10:13:43.513215] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.921 [2024-04-17 10:13:43.513224] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.921 [2024-04-17 10:13:43.513244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.921 10:13:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.921 10:13:44 -- common/autotest_common.sh@852 -- # return 0 00:17:10.921 10:13:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:10.921 10:13:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:10.921 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 10:13:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.921 10:13:44 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.921 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.921 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 [2024-04-17 10:13:44.230620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.921 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.921 10:13:44 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:10.921 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.921 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.921 10:13:44 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.921 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.921 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 [2024-04-17 10:13:44.250780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.178 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.178 10:13:44 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:11.178 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.178 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.178 NULL1 00:17:11.178 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.178 10:13:44 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:11.178 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.178 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.178 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.178 10:13:44 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:11.178 10:13:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.178 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.178 10:13:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.178 10:13:44 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:11.178 [2024-04-17 10:13:44.302333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
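Note: the rpc_cmd calls above assemble the target that fused_ordering then drives: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev (NULL1, 1000 MB, 512-byte blocks) attached as a namespace. rpc_cmd in the harness ultimately drives scripts/rpc.py, so a rough hand-typed equivalent of the same sequence (default RPC socket assumed) would be:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MB, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1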
00:17:11.178 [2024-04-17 10:13:44.302365] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407417 ] 00:17:11.178 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.744 Attached to nqn.2016-06.io.spdk:cnode1 00:17:11.744 Namespace ID: 1 size: 1GB 00:17:11.744 fused_ordering(0) 00:17:11.744 fused_ordering(1) 00:17:11.744 fused_ordering(2) 00:17:11.744 fused_ordering(3) 00:17:11.744 fused_ordering(4) 00:17:11.744 fused_ordering(5) 00:17:11.744 fused_ordering(6) 00:17:11.744 fused_ordering(7) 00:17:11.744 fused_ordering(8) 00:17:11.744 fused_ordering(9) 00:17:11.744 fused_ordering(10) 00:17:11.744 fused_ordering(11) 00:17:11.744 fused_ordering(12) 00:17:11.744 fused_ordering(13) 00:17:11.744 fused_ordering(14) 00:17:11.744 fused_ordering(15) 00:17:11.744 fused_ordering(16) 00:17:11.744 fused_ordering(17) 00:17:11.744 fused_ordering(18) 00:17:11.744 fused_ordering(19) 00:17:11.744 fused_ordering(20) 00:17:11.744 fused_ordering(21) 00:17:11.744 fused_ordering(22) 00:17:11.744 fused_ordering(23) 00:17:11.744 fused_ordering(24) 00:17:11.744 fused_ordering(25) 00:17:11.744 fused_ordering(26) 00:17:11.744 fused_ordering(27) 00:17:11.744 fused_ordering(28) 00:17:11.744 fused_ordering(29) 00:17:11.744 fused_ordering(30) 00:17:11.744 fused_ordering(31) 00:17:11.744 fused_ordering(32) 00:17:11.744 fused_ordering(33) 00:17:11.744 fused_ordering(34) 00:17:11.744 fused_ordering(35) 00:17:11.744 fused_ordering(36) 00:17:11.744 fused_ordering(37) 00:17:11.744 fused_ordering(38) 00:17:11.744 fused_ordering(39) 00:17:11.744 fused_ordering(40) 00:17:11.744 fused_ordering(41) 00:17:11.744 fused_ordering(42) 00:17:11.744 fused_ordering(43) 00:17:11.744 fused_ordering(44) 00:17:11.744 fused_ordering(45) 00:17:11.744 fused_ordering(46) 00:17:11.744 fused_ordering(47) 00:17:11.744 fused_ordering(48) 00:17:11.744 fused_ordering(49) 00:17:11.744 fused_ordering(50) 00:17:11.744 fused_ordering(51) 00:17:11.744 fused_ordering(52) 00:17:11.744 fused_ordering(53) 00:17:11.744 fused_ordering(54) 00:17:11.744 fused_ordering(55) 00:17:11.744 fused_ordering(56) 00:17:11.744 fused_ordering(57) 00:17:11.744 fused_ordering(58) 00:17:11.744 fused_ordering(59) 00:17:11.744 fused_ordering(60) 00:17:11.744 fused_ordering(61) 00:17:11.744 fused_ordering(62) 00:17:11.744 fused_ordering(63) 00:17:11.744 fused_ordering(64) 00:17:11.744 fused_ordering(65) 00:17:11.744 fused_ordering(66) 00:17:11.744 fused_ordering(67) 00:17:11.744 fused_ordering(68) 00:17:11.744 fused_ordering(69) 00:17:11.744 fused_ordering(70) 00:17:11.744 fused_ordering(71) 00:17:11.744 fused_ordering(72) 00:17:11.744 fused_ordering(73) 00:17:11.744 fused_ordering(74) 00:17:11.744 fused_ordering(75) 00:17:11.744 fused_ordering(76) 00:17:11.744 fused_ordering(77) 00:17:11.744 fused_ordering(78) 00:17:11.744 fused_ordering(79) 00:17:11.744 fused_ordering(80) 00:17:11.744 fused_ordering(81) 00:17:11.744 fused_ordering(82) 00:17:11.744 fused_ordering(83) 00:17:11.744 fused_ordering(84) 00:17:11.744 fused_ordering(85) 00:17:11.744 fused_ordering(86) 00:17:11.744 fused_ordering(87) 00:17:11.744 fused_ordering(88) 00:17:11.744 fused_ordering(89) 00:17:11.744 fused_ordering(90) 00:17:11.744 fused_ordering(91) 00:17:11.744 fused_ordering(92) 00:17:11.744 fused_ordering(93) 00:17:11.744 fused_ordering(94) 00:17:11.744 fused_ordering(95) 00:17:11.744 fused_ordering(96) 00:17:11.744 
00:17:11.744 fused_ordering(97) through fused_ordering(956): all completed in sequence between 00:17:11.744 and 00:17:13.706, each logged with the same single-line format as the entries immediately above and below.
fused_ordering(957) 00:17:13.706 fused_ordering(958) 00:17:13.706 fused_ordering(959) 00:17:13.706 fused_ordering(960) 00:17:13.706 fused_ordering(961) 00:17:13.706 fused_ordering(962) 00:17:13.706 fused_ordering(963) 00:17:13.706 fused_ordering(964) 00:17:13.706 fused_ordering(965) 00:17:13.706 fused_ordering(966) 00:17:13.706 fused_ordering(967) 00:17:13.706 fused_ordering(968) 00:17:13.706 fused_ordering(969) 00:17:13.706 fused_ordering(970) 00:17:13.706 fused_ordering(971) 00:17:13.706 fused_ordering(972) 00:17:13.706 fused_ordering(973) 00:17:13.706 fused_ordering(974) 00:17:13.706 fused_ordering(975) 00:17:13.706 fused_ordering(976) 00:17:13.706 fused_ordering(977) 00:17:13.706 fused_ordering(978) 00:17:13.706 fused_ordering(979) 00:17:13.706 fused_ordering(980) 00:17:13.706 fused_ordering(981) 00:17:13.706 fused_ordering(982) 00:17:13.706 fused_ordering(983) 00:17:13.706 fused_ordering(984) 00:17:13.706 fused_ordering(985) 00:17:13.706 fused_ordering(986) 00:17:13.706 fused_ordering(987) 00:17:13.706 fused_ordering(988) 00:17:13.706 fused_ordering(989) 00:17:13.706 fused_ordering(990) 00:17:13.706 fused_ordering(991) 00:17:13.706 fused_ordering(992) 00:17:13.706 fused_ordering(993) 00:17:13.706 fused_ordering(994) 00:17:13.706 fused_ordering(995) 00:17:13.706 fused_ordering(996) 00:17:13.706 fused_ordering(997) 00:17:13.706 fused_ordering(998) 00:17:13.706 fused_ordering(999) 00:17:13.706 fused_ordering(1000) 00:17:13.706 fused_ordering(1001) 00:17:13.706 fused_ordering(1002) 00:17:13.706 fused_ordering(1003) 00:17:13.706 fused_ordering(1004) 00:17:13.706 fused_ordering(1005) 00:17:13.706 fused_ordering(1006) 00:17:13.706 fused_ordering(1007) 00:17:13.706 fused_ordering(1008) 00:17:13.706 fused_ordering(1009) 00:17:13.706 fused_ordering(1010) 00:17:13.706 fused_ordering(1011) 00:17:13.706 fused_ordering(1012) 00:17:13.706 fused_ordering(1013) 00:17:13.706 fused_ordering(1014) 00:17:13.706 fused_ordering(1015) 00:17:13.706 fused_ordering(1016) 00:17:13.706 fused_ordering(1017) 00:17:13.706 fused_ordering(1018) 00:17:13.706 fused_ordering(1019) 00:17:13.706 fused_ordering(1020) 00:17:13.706 fused_ordering(1021) 00:17:13.706 fused_ordering(1022) 00:17:13.706 fused_ordering(1023) 00:17:13.706 10:13:47 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:13.706 10:13:47 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:13.706 10:13:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:13.706 10:13:47 -- nvmf/common.sh@116 -- # sync 00:17:13.706 10:13:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:13.706 10:13:47 -- nvmf/common.sh@119 -- # set +e 00:17:13.706 10:13:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:13.706 10:13:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:13.706 rmmod nvme_tcp 00:17:13.964 rmmod nvme_fabrics 00:17:13.964 rmmod nvme_keyring 00:17:13.964 10:13:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:13.964 10:13:47 -- nvmf/common.sh@123 -- # set -e 00:17:13.964 10:13:47 -- nvmf/common.sh@124 -- # return 0 00:17:13.964 10:13:47 -- nvmf/common.sh@477 -- # '[' -n 3407338 ']' 00:17:13.964 10:13:47 -- nvmf/common.sh@478 -- # killprocess 3407338 00:17:13.964 10:13:47 -- common/autotest_common.sh@926 -- # '[' -z 3407338 ']' 00:17:13.964 10:13:47 -- common/autotest_common.sh@930 -- # kill -0 3407338 00:17:13.964 10:13:47 -- common/autotest_common.sh@931 -- # uname 00:17:13.964 10:13:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:13.964 10:13:47 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3407338 00:17:13.964 10:13:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:13.964 10:13:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:13.964 10:13:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3407338' 00:17:13.964 killing process with pid 3407338 00:17:13.964 10:13:47 -- common/autotest_common.sh@945 -- # kill 3407338 00:17:13.964 10:13:47 -- common/autotest_common.sh@950 -- # wait 3407338 00:17:14.223 10:13:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:14.223 10:13:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:14.223 10:13:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:14.223 10:13:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.223 10:13:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:14.223 10:13:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.223 10:13:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.223 10:13:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.126 10:13:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:16.126 00:17:16.126 real 0m12.004s 00:17:16.126 user 0m6.931s 00:17:16.126 sys 0m6.266s 00:17:16.126 10:13:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.126 10:13:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.126 ************************************ 00:17:16.126 END TEST nvmf_fused_ordering 00:17:16.126 ************************************ 00:17:16.385 10:13:49 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:16.385 10:13:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:16.385 10:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:16.385 10:13:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.385 ************************************ 00:17:16.385 START TEST nvmf_delete_subsystem 00:17:16.385 ************************************ 00:17:16.385 10:13:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:16.385 * Looking for test storage... 
00:17:16.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.385 10:13:49 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.385 10:13:49 -- nvmf/common.sh@7 -- # uname -s 00:17:16.385 10:13:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.385 10:13:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.385 10:13:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.385 10:13:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.385 10:13:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.385 10:13:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.385 10:13:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.385 10:13:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.385 10:13:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.385 10:13:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.385 10:13:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:16.385 10:13:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:16.385 10:13:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.385 10:13:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.385 10:13:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.385 10:13:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.385 10:13:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.385 10:13:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.385 10:13:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.385 10:13:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.385 10:13:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.385 10:13:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.385 10:13:49 -- paths/export.sh@5 -- # export PATH 00:17:16.385 10:13:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.385 10:13:49 -- nvmf/common.sh@46 -- # : 0 00:17:16.385 10:13:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:16.385 10:13:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:16.385 10:13:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:16.385 10:13:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.385 10:13:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.385 10:13:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:16.385 10:13:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:16.385 10:13:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:16.385 10:13:49 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:16.385 10:13:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:16.385 10:13:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.385 10:13:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:16.385 10:13:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:16.385 10:13:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:16.385 10:13:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.385 10:13:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.385 10:13:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.385 10:13:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:16.385 10:13:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:16.385 10:13:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:16.385 10:13:49 -- common/autotest_common.sh@10 -- # set +x 00:17:21.657 10:13:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:21.657 10:13:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:21.657 10:13:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:21.657 10:13:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:21.657 10:13:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:21.657 10:13:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:21.657 10:13:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:21.657 10:13:54 -- nvmf/common.sh@294 -- # net_devs=() 00:17:21.657 10:13:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:21.657 10:13:54 -- nvmf/common.sh@295 -- # e810=() 00:17:21.657 10:13:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:21.657 10:13:54 -- nvmf/common.sh@296 -- # x722=() 
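The block of common.sh assignments traced above establishes the defaults that every nvmf test in this run relies on. Restated as a standalone sketch (values copied from the trace; nvme gen-hostnqn requires nvme-cli, and the host-ID derivation is an approximation of what the helper does, matching the values seen above):

#!/usr/bin/env bash
# Defaults established by test/nvmf/common.sh, as traced in this log.
NVMF_PORT=4420                         # primary NVMe/TCP listener port
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100             # not used by the TCP path, which assigns 10.0.0.1/10.0.0.2 later in this trace
NVMF_IP_LEAST_ADDR=8
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-... as seen above
NVME_HOSTID=${NVME_HOSTNQN##*:}        # host ID is the UUID portion of the hostnqn
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT="nvme connect"
NET_TYPE=phy                           # this CI node drives physical e810 ports, not virtual interfaces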
00:17:21.657 10:13:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:21.657 10:13:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:21.657 10:13:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:21.657 10:13:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.657 10:13:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:21.657 10:13:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:21.657 10:13:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.657 10:13:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:21.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:21.657 10:13:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.657 10:13:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:21.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:21.657 10:13:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.657 10:13:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.657 10:13:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.657 10:13:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:21.657 Found net devices under 0000:af:00.0: cvl_0_0 00:17:21.657 10:13:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:17:21.657 10:13:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.657 10:13:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.657 10:13:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.657 10:13:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:21.657 Found net devices under 0000:af:00.1: cvl_0_1 00:17:21.657 10:13:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.657 10:13:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:21.657 10:13:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:21.657 10:13:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:21.657 10:13:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.657 10:13:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.657 10:13:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.657 10:13:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:21.657 10:13:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.657 10:13:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.657 10:13:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:21.657 10:13:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.657 10:13:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.657 10:13:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:21.657 10:13:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:21.657 10:13:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.916 10:13:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.916 10:13:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.916 10:13:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.916 10:13:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:21.916 10:13:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.916 10:13:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.916 10:13:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.916 10:13:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:21.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:17:21.916 00:17:21.916 --- 10.0.0.2 ping statistics --- 00:17:21.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.916 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:22.175 10:13:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:22.175 00:17:22.175 --- 10.0.0.1 ping statistics --- 00:17:22.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.175 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:22.175 10:13:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.175 10:13:55 -- nvmf/common.sh@410 -- # return 0 00:17:22.175 10:13:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:22.175 10:13:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.175 10:13:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:22.175 10:13:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:22.175 10:13:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.175 10:13:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:22.175 10:13:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:22.175 10:13:55 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:22.175 10:13:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.175 10:13:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.175 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.175 10:13:55 -- nvmf/common.sh@469 -- # nvmfpid=3411632 00:17:22.175 10:13:55 -- nvmf/common.sh@470 -- # waitforlisten 3411632 00:17:22.175 10:13:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:22.175 10:13:55 -- common/autotest_common.sh@819 -- # '[' -z 3411632 ']' 00:17:22.175 10:13:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.175 10:13:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.175 10:13:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.175 10:13:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.175 10:13:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.175 [2024-04-17 10:13:55.348302] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:22.175 [2024-04-17 10:13:55.348356] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.175 [2024-04-17 10:13:55.434273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.435 [2024-04-17 10:13:55.520914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.435 [2024-04-17 10:13:55.521055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.435 [2024-04-17 10:13:55.521066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.435 [2024-04-17 10:13:55.521078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
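The namespace plumbing that nvmf_tcp_init traced a few lines above can be reproduced by hand. A standalone sketch using the port names and addresses from this run (cvl_0_0 / cvl_0_1 and 10.0.0.2 / 10.0.0.1; on another machine the ice port names would differ) follows; it must run as root:

#!/usr/bin/env bash
# Rebuild the TCP test topology traced above: the target-side port (cvl_0_0) is
# moved into its own network namespace while the initiator-side port (cvl_0_1)
# stays in the default namespace, so target and initiator use separate stacks.
set -e
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow the NVMe/TCP port through any local firewall, then sanity-check both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1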
00:17:22.435 [2024-04-17 10:13:55.521130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.435 [2024-04-17 10:13:55.521135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.001 10:13:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.001 10:13:56 -- common/autotest_common.sh@852 -- # return 0 00:17:23.001 10:13:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.001 10:13:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:23.001 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.001 10:13:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.001 10:13:56 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.001 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.001 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.001 [2024-04-17 10:13:56.308535] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.001 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.001 10:13:56 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:23.001 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.001 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.001 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.001 10:13:56 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.001 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.001 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.001 [2024-04-17 10:13:56.324715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.001 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.001 10:13:56 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:23.001 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.001 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.260 NULL1 00:17:23.260 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.260 10:13:56 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:23.260 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.260 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.260 Delay0 00:17:23.260 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.260 10:13:56 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:23.260 10:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:23.260 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.260 10:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.260 10:13:56 -- target/delete_subsystem.sh@28 -- # perf_pid=3411887 00:17:23.260 10:13:56 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:23.260 10:13:56 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:23.260 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.260 [2024-04-17 10:13:56.399310] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:25.155 10:13:58 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.155 10:13:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:25.155 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Write completed with error (sct=0, sc=8) 00:17:25.414 starting I/O failed: -6 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.414 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error 
(sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read 
completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 [2024-04-17 10:13:58.523144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8cdc00c350 is same with the state(5) to be set 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 
Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Write completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.415 starting I/O failed: -6 00:17:25.415 Read completed with error (sct=0, sc=8) 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Write completed with error (sct=0, sc=8) 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 00:17:25.416 Read completed with error (sct=0, sc=8) 00:17:25.416 starting I/O failed: -6 
00:17:25.416 [2024-04-17 10:13:58.523592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1515040 is same with the state(5) to be set 00:17:26.351 [2024-04-17 10:13:59.493627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15165e0 is same with the state(5) to be set 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 [2024-04-17 10:13:59.525477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8cdc00bf20 is same with the state(5) to be set 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 [2024-04-17 10:13:59.528061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8cdc00c600 is same with the state(5) to be set 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, 
sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 [2024-04-17 10:13:59.528735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8cdc000c00 is same with the state(5) to be set 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Write completed with error (sct=0, sc=8) 00:17:26.351 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Write completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 Read completed with error (sct=0, sc=8) 00:17:26.352 [2024-04-17 10:13:59.528900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150ba90 is same with the state(5) to be set 
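The burst of failed Read/Write completions above is the expected outcome of this test: nvmf_delete_subsystem is issued while spdk_nvme_perf still has I/O outstanding against nqn.2016-06.io.spdk:cnode1, so every queued command is completed with an error and the TCP qpairs are torn down. Reduced to its essentials, the flow being exercised looks roughly like the sketch below. It drives the target through SPDK's scripts/rpc.py directly, whereas the rpc_cmd calls traced in this log go through the test suite's wrapper, and it uses a plain Malloc bdev where this run uses the Delay0 bdev; all command names and flags are the ones visible in the trace.

# target side: subsystem with one namespace and a TCP listener
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: start a random read/write workload in the background
# (build/bin/spdk_nvme_perf in the SPDK tree) ...
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# ... and pull the subsystem out from under it while that I/O is still in flight;
# the in-flight commands come back as the failed completions seen above, and perf is
# expected to exit non-zero (checked with the polling loop sketched further below)
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1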
00:17:26.352 [2024-04-17 10:13:59.529529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15165e0 (9): Bad file descriptor
00:17:26.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:17:26.352 10:13:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:26.352 10:13:59 -- target/delete_subsystem.sh@34 -- # delay=0
00:17:26.352 10:13:59 -- target/delete_subsystem.sh@35 -- # kill -0 3411887
00:17:26.352 10:13:59 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:17:26.352 Initializing NVMe Controllers
00:17:26.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:26.352 Controller IO queue size 128, less than required.
00:17:26.352 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:26.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:17:26.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:17:26.352 Initialization complete. Launching workers.
00:17:26.352 ========================================================
00:17:26.352 Latency(us)
00:17:26.352 Device Information : IOPS MiB/s Average min max
00:17:26.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.08 0.09 858370.59 293.54 1016613.42
00:17:26.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.66 0.08 991598.19 996.85 2005008.49
00:17:26.352 ========================================================
00:17:26.352 Total : 338.74 0.17 923131.30 293.54 2005008.49
00:17:26.352
00:17:26.925 10:14:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:17:26.925 10:14:00 -- target/delete_subsystem.sh@35 -- # kill -0 3411887
00:17:26.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3411887) - No such process
00:17:26.925 10:14:00 -- target/delete_subsystem.sh@45 -- # NOT wait 3411887
00:17:26.925 10:14:00 -- common/autotest_common.sh@640 -- # local es=0
00:17:26.925 10:14:00 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3411887
00:17:26.925 10:14:00 -- common/autotest_common.sh@628 -- # local arg=wait
00:17:26.925 10:14:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:26.925 10:14:00 -- common/autotest_common.sh@632 -- # type -t wait
00:17:26.925 10:14:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:26.925 10:14:00 -- common/autotest_common.sh@643 -- # wait 3411887
00:17:26.925 10:14:00 -- common/autotest_common.sh@643 -- # es=1
00:17:26.925 10:14:00 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:17:26.925 10:14:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:17:26.925 10:14:00 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:17:26.925 10:14:00 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:26.925 10:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:26.925 10:14:00 -- common/autotest_common.sh@10 -- # set +x
00:17:26.925 10:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:26.925 10:14:00 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:26.925 10:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:26.925 10:14:00 -- common/autotest_common.sh@10 --
set +x 00:17:26.925 [2024-04-17 10:14:00.061531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.925 10:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:26.925 10:14:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.925 10:14:00 -- common/autotest_common.sh@10 -- # set +x 00:17:26.925 10:14:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@54 -- # perf_pid=3412460 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:26.925 10:14:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:26.925 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.925 [2024-04-17 10:14:00.127289] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:27.492 10:14:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:27.492 10:14:00 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:27.492 10:14:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:28.058 10:14:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:28.059 10:14:01 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:28.059 10:14:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:28.316 10:14:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:28.316 10:14:01 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:28.316 10:14:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:28.882 10:14:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:28.882 10:14:02 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:28.882 10:14:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:29.448 10:14:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:29.448 10:14:02 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:29.448 10:14:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:30.014 10:14:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:30.014 10:14:03 -- target/delete_subsystem.sh@57 -- # kill -0 3412460 00:17:30.014 10:14:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:30.014 Initializing NVMe Controllers 00:17:30.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.014 Controller IO queue size 128, less than required. 00:17:30.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:30.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:30.014 Initialization complete. Launching workers. 
00:17:30.014 ========================================================
00:17:30.014 Latency(us)
00:17:30.014 Device Information : IOPS MiB/s Average min max
00:17:30.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003847.17 1000198.98 1015760.05
00:17:30.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006816.54 1000629.55 1016195.67
00:17:30.014 ========================================================
00:17:30.014 Total : 256.00 0.12 1005331.86 1000198.98 1016195.67
00:17:30.014
00:17:30.581 10:14:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:17:30.581 10:14:03 -- target/delete_subsystem.sh@57 -- # kill -0 3412460
00:17:30.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3412460) - No such process
00:17:30.581 10:14:03 -- target/delete_subsystem.sh@67 -- # wait 3412460
00:17:30.581 10:14:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:17:30.581 10:14:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:17:30.581 10:14:03 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:30.581 10:14:03 -- nvmf/common.sh@116 -- # sync
00:17:30.581 10:14:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:30.581 10:14:03 -- nvmf/common.sh@119 -- # set +e
00:17:30.581 10:14:03 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:30.581 10:14:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:30.581 rmmod nvme_tcp
00:17:30.581 rmmod nvme_fabrics
00:17:30.581 rmmod nvme_keyring
00:17:30.581 10:14:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:30.581 10:14:03 -- nvmf/common.sh@123 -- # set -e
00:17:30.581 10:14:03 -- nvmf/common.sh@124 -- # return 0
00:17:30.581 10:14:03 -- nvmf/common.sh@477 -- # '[' -n 3411632 ']'
00:17:30.581 10:14:03 -- nvmf/common.sh@478 -- # killprocess 3411632
00:17:30.581 10:14:03 -- common/autotest_common.sh@926 -- # '[' -z 3411632 ']'
00:17:30.581 10:14:03 -- common/autotest_common.sh@930 -- # kill -0 3411632
00:17:30.581 10:14:03 -- common/autotest_common.sh@931 -- # uname
00:17:30.581 10:14:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:30.581 10:14:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3411632
00:17:30.581 10:14:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:30.581 10:14:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:30.581 10:14:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3411632'
00:17:30.581 killing process with pid 3411632
00:17:30.581 10:14:03 -- common/autotest_common.sh@945 -- # kill 3411632
00:17:30.581 10:14:03 -- common/autotest_common.sh@950 -- # wait 3411632
00:17:30.840 10:14:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:30.840 10:14:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:30.840 10:14:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:30.840 10:14:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:30.840 10:14:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:30.840 10:14:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:30.840 10:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:30.840 10:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:32.754 10:14:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:17:32.754
00:17:32.754 real 0m16.560s
00:17:32.754 user 0m30.651s
00:17:32.754 sys 0m5.184s
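The delay loops traced above (the repeated kill -0 / sleep 0.5 steps in delete_subsystem.sh, guarded by a bounded counter) are a simple poll on the perf PID followed by an assertion that it failed. Stripped of the autotest_common.sh plumbing, the idiom is roughly the following; variable names are illustrative, and the suite's NOT/wait helpers additionally distinguish signal deaths from ordinary non-zero exits via the es > 128 check seen in the trace.

# wait (bounded) for the background perf process to go away after the subsystem is deleted
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "perf did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done

# the workload lost its subsystem mid-run, so a clean exit would itself be a test failure
if wait "$perf_pid"; then
    echo "unexpected: spdk_nvme_perf exited without errors" >&2
    exit 1
fi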
00:17:32.754 10:14:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.754 10:14:06 -- common/autotest_common.sh@10 -- # set +x 00:17:32.754 ************************************ 00:17:32.754 END TEST nvmf_delete_subsystem 00:17:32.754 ************************************ 00:17:32.754 10:14:06 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:32.754 10:14:06 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:32.754 10:14:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:32.754 10:14:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:32.754 10:14:06 -- common/autotest_common.sh@10 -- # set +x 00:17:32.755 ************************************ 00:17:32.755 START TEST nvmf_nvme_cli 00:17:32.755 ************************************ 00:17:32.755 10:14:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:33.014 * Looking for test storage... 00:17:33.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.014 10:14:06 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.014 10:14:06 -- nvmf/common.sh@7 -- # uname -s 00:17:33.014 10:14:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.014 10:14:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.014 10:14:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.014 10:14:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.014 10:14:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.014 10:14:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.014 10:14:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.014 10:14:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.014 10:14:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.014 10:14:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.014 10:14:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.014 10:14:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:33.014 10:14:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.014 10:14:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.014 10:14:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.014 10:14:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.014 10:14:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.014 10:14:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.014 10:14:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.014 10:14:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.014 10:14:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.014 10:14:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.014 10:14:06 -- paths/export.sh@5 -- # export PATH 00:17:33.014 10:14:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.014 10:14:06 -- nvmf/common.sh@46 -- # : 0 00:17:33.014 10:14:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:33.014 10:14:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:33.014 10:14:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:33.014 10:14:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.014 10:14:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.014 10:14:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:33.014 10:14:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:33.014 10:14:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:33.014 10:14:06 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.014 10:14:06 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.014 10:14:06 -- target/nvme_cli.sh@14 -- # devs=() 00:17:33.014 10:14:06 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:33.014 10:14:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:33.014 10:14:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.014 10:14:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:33.014 10:14:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:33.014 10:14:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:33.014 10:14:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.014 10:14:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.014 10:14:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.014 10:14:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:33.014 10:14:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:33.014 10:14:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:33.014 10:14:06 -- common/autotest_common.sh@10 -- # set +x 00:17:38.289 10:14:11 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:38.289 10:14:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:38.289 10:14:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:38.289 10:14:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:38.289 10:14:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:38.289 10:14:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:38.289 10:14:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:38.289 10:14:11 -- nvmf/common.sh@294 -- # net_devs=() 00:17:38.289 10:14:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:38.289 10:14:11 -- nvmf/common.sh@295 -- # e810=() 00:17:38.289 10:14:11 -- nvmf/common.sh@295 -- # local -ga e810 00:17:38.289 10:14:11 -- nvmf/common.sh@296 -- # x722=() 00:17:38.289 10:14:11 -- nvmf/common.sh@296 -- # local -ga x722 00:17:38.289 10:14:11 -- nvmf/common.sh@297 -- # mlx=() 00:17:38.289 10:14:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:38.289 10:14:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.289 10:14:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:38.289 10:14:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:38.289 10:14:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:38.548 10:14:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:38.548 10:14:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:38.548 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:38.548 10:14:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:38.548 10:14:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:38.548 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:38.548 10:14:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
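The device probe traced here keys purely off PCI IDs: 0x8086:0x159b is an Intel E810 port handled by the ice driver, so both 0000:af:00.0 and 0000:af:00.1 are accepted, and (as the entries that follow show) the corresponding kernel net devices are then looked up under sysfs. A rough standalone equivalent of that lookup, with the lspci invocation as an illustrative stand-in for the script's cached PCI bus scan:

# list E810 (8086:159b) functions and the net interfaces bound to them
for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "$pci -> $(basename "$netdev")"
    done
done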
00:17:38.548 10:14:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:38.548 10:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.548 10:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.548 10:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:38.548 Found net devices under 0000:af:00.0: cvl_0_0 00:17:38.548 10:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.548 10:14:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:38.548 10:14:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.548 10:14:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.548 10:14:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:38.548 Found net devices under 0000:af:00.1: cvl_0_1 00:17:38.548 10:14:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.548 10:14:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:38.548 10:14:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:38.548 10:14:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:38.548 10:14:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.548 10:14:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.548 10:14:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.548 10:14:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:38.548 10:14:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.548 10:14:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.548 10:14:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:38.548 10:14:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.548 10:14:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.548 10:14:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:38.548 10:14:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:38.548 10:14:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.548 10:14:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.548 10:14:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.548 10:14:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.548 10:14:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:38.548 10:14:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.548 10:14:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.548 10:14:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.548 10:14:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:38.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:38.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:17:38.548 00:17:38.548 --- 10.0.0.2 ping statistics --- 00:17:38.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.548 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:17:38.548 10:14:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:38.807 00:17:38.807 --- 10.0.0.1 ping statistics --- 00:17:38.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.807 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:38.807 10:14:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.807 10:14:11 -- nvmf/common.sh@410 -- # return 0 00:17:38.807 10:14:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:38.807 10:14:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.807 10:14:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:38.807 10:14:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:38.807 10:14:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.807 10:14:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:38.807 10:14:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:38.807 10:14:11 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:38.807 10:14:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:38.807 10:14:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:38.807 10:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 10:14:11 -- nvmf/common.sh@469 -- # nvmfpid=3416738 00:17:38.807 10:14:11 -- nvmf/common.sh@470 -- # waitforlisten 3416738 00:17:38.807 10:14:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.807 10:14:11 -- common/autotest_common.sh@819 -- # '[' -z 3416738 ']' 00:17:38.807 10:14:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.807 10:14:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:38.807 10:14:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.807 10:14:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:38.807 10:14:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 [2024-04-17 10:14:11.970393] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:38.807 [2024-04-17 10:14:11.970447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.807 [2024-04-17 10:14:12.062310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.066 [2024-04-17 10:14:12.152545] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:39.066 [2024-04-17 10:14:12.152693] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.066 [2024-04-17 10:14:12.152704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
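Before the target comes up, the nvmf_tcp_init sequence above splits the two E810 ports across a network namespace so that initiator and target traffic actually crosses the link: cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace. Condensed into a standalone sketch (interface, namespace and socket names as in this run; the socket wait loop is only an illustrative stand-in for the suite's waitforlisten helper):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

# run the target inside the namespace and wait for its RPC socket to appear
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done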
00:17:39.066 [2024-04-17 10:14:12.152714] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.066 [2024-04-17 10:14:12.152762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.066 [2024-04-17 10:14:12.152863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.066 [2024-04-17 10:14:12.152968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.066 [2024-04-17 10:14:12.152968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.633 10:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.633 10:14:12 -- common/autotest_common.sh@852 -- # return 0 00:17:39.633 10:14:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.633 10:14:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 10:14:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.633 10:14:12 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 [2024-04-17 10:14:12.868190] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 Malloc0 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 Malloc1 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.633 [2024-04-17 10:14:12.955181] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:39.633 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.633 10:14:12 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:39.633 10:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:39.633 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:17:39.892 10:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:39.892 10:14:12 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:17:39.892 00:17:39.892 Discovery Log Number of Records 2, Generation counter 2 00:17:39.892 =====Discovery Log Entry 0====== 00:17:39.892 trtype: tcp 00:17:39.892 adrfam: ipv4 00:17:39.892 subtype: current discovery subsystem 00:17:39.892 treq: not required 00:17:39.892 portid: 0 00:17:39.892 trsvcid: 4420 00:17:39.892 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:39.892 traddr: 10.0.0.2 00:17:39.892 eflags: explicit discovery connections, duplicate discovery information 00:17:39.892 sectype: none 00:17:39.892 =====Discovery Log Entry 1====== 00:17:39.892 trtype: tcp 00:17:39.892 adrfam: ipv4 00:17:39.892 subtype: nvme subsystem 00:17:39.892 treq: not required 00:17:39.892 portid: 0 00:17:39.892 trsvcid: 4420 00:17:39.892 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:39.892 traddr: 10.0.0.2 00:17:39.892 eflags: none 00:17:39.892 sectype: none 00:17:39.892 10:14:13 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:39.892 10:14:13 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:39.892 10:14:13 -- nvmf/common.sh@510 -- # local dev _ 00:17:39.892 10:14:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.892 10:14:13 -- nvmf/common.sh@509 -- # nvme list 00:17:39.892 10:14:13 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:39.892 10:14:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.892 10:14:13 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:39.892 10:14:13 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:39.892 10:14:13 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:39.892 10:14:13 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.266 10:14:14 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:41.266 10:14:14 -- common/autotest_common.sh@1177 -- # local i=0 00:17:41.266 10:14:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.266 10:14:14 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:17:41.266 10:14:14 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:17:41.266 10:14:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:43.166 10:14:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:43.166 10:14:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:43.166 10:14:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.166 10:14:16 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:17:43.166 10:14:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.166 10:14:16 -- common/autotest_common.sh@1187 -- # return 0 00:17:43.166 10:14:16 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:43.166 10:14:16 -- 
nvmf/common.sh@510 -- # local dev _ 00:17:43.166 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.166 10:14:16 -- nvmf/common.sh@509 -- # nvme list 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:43.424 /dev/nvme0n1 ]] 00:17:43.424 10:14:16 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:43.424 10:14:16 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:43.424 10:14:16 -- nvmf/common.sh@510 -- # local dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@509 -- # nvme list 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:43.424 10:14:16 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:43.424 10:14:16 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:43.424 10:14:16 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:43.424 10:14:16 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.683 10:14:16 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.683 10:14:16 -- common/autotest_common.sh@1198 -- # local i=0 00:17:43.683 10:14:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:43.683 10:14:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.683 10:14:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:43.683 10:14:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.683 10:14:17 -- common/autotest_common.sh@1210 -- # return 0 00:17:43.683 10:14:17 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:43.683 10:14:17 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.683 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.683 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:17:43.941 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.941 10:14:17 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:43.941 10:14:17 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:43.941 10:14:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.941 10:14:17 -- nvmf/common.sh@116 -- # sync 00:17:43.941 10:14:17 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.941 10:14:17 -- nvmf/common.sh@119 -- # set +e 00:17:43.941 10:14:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.941 10:14:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.941 rmmod nvme_tcp 00:17:43.941 rmmod nvme_fabrics 00:17:43.941 rmmod nvme_keyring 00:17:43.941 10:14:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.941 10:14:17 -- nvmf/common.sh@123 -- # set -e 00:17:43.941 10:14:17 -- nvmf/common.sh@124 -- # return 0 00:17:43.941 10:14:17 -- nvmf/common.sh@477 -- # '[' -n 3416738 ']' 00:17:43.941 10:14:17 -- nvmf/common.sh@478 -- # killprocess 3416738 00:17:43.941 10:14:17 -- common/autotest_common.sh@926 -- # '[' -z 3416738 ']' 00:17:43.941 10:14:17 -- common/autotest_common.sh@930 -- # kill -0 3416738 00:17:43.941 10:14:17 -- common/autotest_common.sh@931 -- # uname 00:17:43.941 10:14:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:43.941 10:14:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3416738 00:17:43.941 10:14:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:43.941 10:14:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:43.941 10:14:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3416738' 00:17:43.941 killing process with pid 3416738 00:17:43.941 10:14:17 -- common/autotest_common.sh@945 -- # kill 3416738 00:17:43.941 10:14:17 -- common/autotest_common.sh@950 -- # wait 3416738 00:17:44.199 10:14:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:44.199 10:14:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:44.199 10:14:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:44.199 10:14:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.199 10:14:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:44.199 10:14:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.199 10:14:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.199 10:14:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.184 10:14:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:46.184 00:17:46.184 real 0m13.407s 00:17:46.184 user 0m22.414s 00:17:46.184 sys 0m5.011s 00:17:46.184 10:14:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.184 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:17:46.184 ************************************ 00:17:46.184 END TEST nvmf_nvme_cli 00:17:46.184 ************************************ 00:17:46.184 10:14:19 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:46.184 10:14:19 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:46.184 10:14:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:46.184 10:14:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:46.184 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:17:46.443 ************************************ 00:17:46.443 START TEST nvmf_host_management 00:17:46.443 ************************************ 00:17:46.443 10:14:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:46.443 * Looking for test storage... 
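Stepping back from the trace for a moment: the nvmf_nvme_cli run that just finished is, from the host's point of view, a short nvme-cli session against the subsystem created above. Condensed into a sketch (host NQN/ID values as generated by nvme gen-hostnqn in this run; the serial check mirrors the waitforserial helper, which expects both Malloc namespaces to appear):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
HOSTID=00abaa28-3537-eb11-906e-0017a4403562

# ask the discovery service what the target exposes on 10.0.0.2:4420
nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420

# connect to the I/O subsystem and wait until its namespaces show up as block devices
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 2; done

# tear the session down again
nvme disconnect -n nqn.2016-06.io.spdk:cnode1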
00:17:46.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:46.443 10:14:19 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.443 10:14:19 -- nvmf/common.sh@7 -- # uname -s 00:17:46.443 10:14:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.443 10:14:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.443 10:14:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.443 10:14:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.443 10:14:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.443 10:14:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.443 10:14:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.443 10:14:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.443 10:14:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.443 10:14:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.443 10:14:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:46.443 10:14:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:46.443 10:14:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.443 10:14:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.443 10:14:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.443 10:14:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.443 10:14:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.443 10:14:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.443 10:14:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.443 10:14:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.443 10:14:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.443 10:14:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.443 10:14:19 -- paths/export.sh@5 -- # export PATH 00:17:46.443 10:14:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.443 10:14:19 -- nvmf/common.sh@46 -- # : 0 00:17:46.443 10:14:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:46.443 10:14:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:46.443 10:14:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:46.443 10:14:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.443 10:14:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.443 10:14:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:46.443 10:14:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:46.443 10:14:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:46.443 10:14:19 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.443 10:14:19 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.443 10:14:19 -- target/host_management.sh@104 -- # nvmftestinit 00:17:46.443 10:14:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:46.443 10:14:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.443 10:14:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:46.443 10:14:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:46.443 10:14:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:46.443 10:14:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.443 10:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.443 10:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.443 10:14:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:46.443 10:14:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:46.443 10:14:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:46.443 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:17:51.707 10:14:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:51.707 10:14:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:51.707 10:14:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:51.707 10:14:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:51.707 10:14:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:51.707 10:14:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:51.707 10:14:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:51.707 10:14:24 -- nvmf/common.sh@294 -- # net_devs=() 00:17:51.707 10:14:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:51.707 
10:14:24 -- nvmf/common.sh@295 -- # e810=() 00:17:51.707 10:14:24 -- nvmf/common.sh@295 -- # local -ga e810 00:17:51.707 10:14:24 -- nvmf/common.sh@296 -- # x722=() 00:17:51.707 10:14:24 -- nvmf/common.sh@296 -- # local -ga x722 00:17:51.707 10:14:24 -- nvmf/common.sh@297 -- # mlx=() 00:17:51.707 10:14:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:51.707 10:14:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.707 10:14:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.707 10:14:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:51.707 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:51.707 10:14:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:51.707 10:14:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:51.707 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:51.707 10:14:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.707 10:14:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.707 10:14:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.707 10:14:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:17:51.707 Found net devices under 0000:af:00.0: cvl_0_0 00:17:51.707 10:14:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:51.707 10:14:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.707 10:14:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.707 10:14:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:51.707 Found net devices under 0000:af:00.1: cvl_0_1 00:17:51.707 10:14:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:51.707 10:14:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:51.707 10:14:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.707 10:14:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.707 10:14:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:51.707 10:14:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.707 10:14:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.707 10:14:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:51.707 10:14:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.707 10:14:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.707 10:14:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:51.707 10:14:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:51.707 10:14:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.707 10:14:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.707 10:14:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.707 10:14:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.707 10:14:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:51.707 10:14:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.707 10:14:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.707 10:14:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.707 10:14:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:51.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:51.707 00:17:51.707 --- 10.0.0.2 ping statistics --- 00:17:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.707 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:51.707 10:14:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:17:51.707 00:17:51.707 --- 10.0.0.1 ping statistics --- 00:17:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.707 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:51.707 10:14:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.707 10:14:24 -- nvmf/common.sh@410 -- # return 0 00:17:51.707 10:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.707 10:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.707 10:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.707 10:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.707 10:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.707 10:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.707 10:14:24 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:51.707 10:14:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:51.707 10:14:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.707 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.707 ************************************ 00:17:51.707 START TEST nvmf_host_management 00:17:51.707 ************************************ 00:17:51.707 10:14:24 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:51.708 10:14:24 -- target/host_management.sh@69 -- # starttarget 00:17:51.708 10:14:24 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:51.708 10:14:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.708 10:14:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:51.708 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.708 10:14:24 -- nvmf/common.sh@469 -- # nvmfpid=3421335 00:17:51.708 10:14:24 -- nvmf/common.sh@470 -- # waitforlisten 3421335 00:17:51.708 10:14:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:51.708 10:14:24 -- common/autotest_common.sh@819 -- # '[' -z 3421335 ']' 00:17:51.708 10:14:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.708 10:14:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.708 10:14:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.708 10:14:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.708 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:17:51.708 [2024-04-17 10:14:24.894072] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
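The nvmf_tcp_init and nvmfappstart steps traced above boil down to isolating one port of the E810 NIC in a network namespace, addressing 10.0.0.1/10.0.0.2 across it, and starting nvmf_tgt inside that namespace. A condensed sketch of the same sequence, keeping the interface and namespace names from this run (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk) and assuming it is run from the SPDK build tree:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # connectivity check, root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and back
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &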
00:17:51.708 [2024-04-17 10:14:24.894131] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.708 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.708 [2024-04-17 10:14:24.971904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.966 [2024-04-17 10:14:25.057195] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.966 [2024-04-17 10:14:25.057335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.966 [2024-04-17 10:14:25.057347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.966 [2024-04-17 10:14:25.057356] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.966 [2024-04-17 10:14:25.057463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.966 [2024-04-17 10:14:25.057578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.966 [2024-04-17 10:14:25.057690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:51.966 [2024-04-17 10:14:25.057692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.531 10:14:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.531 10:14:25 -- common/autotest_common.sh@852 -- # return 0 00:17:52.531 10:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.531 10:14:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.531 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.788 10:14:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.788 10:14:25 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.788 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.788 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.788 [2024-04-17 10:14:25.868578] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.788 10:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.788 10:14:25 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:52.788 10:14:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:52.788 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.788 10:14:25 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:52.788 10:14:25 -- target/host_management.sh@23 -- # cat 00:17:52.788 10:14:25 -- target/host_management.sh@30 -- # rpc_cmd 00:17:52.788 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.788 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.788 Malloc0 00:17:52.788 [2024-04-17 10:14:25.932520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.788 10:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.788 10:14:25 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:52.788 10:14:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.788 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.788 10:14:25 -- target/host_management.sh@73 -- # perfpid=3421635 00:17:52.788 10:14:25 -- target/host_management.sh@74 -- # 
waitforlisten 3421635 /var/tmp/bdevperf.sock 00:17:52.788 10:14:25 -- common/autotest_common.sh@819 -- # '[' -z 3421635 ']' 00:17:52.788 10:14:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.788 10:14:25 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:52.788 10:14:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.788 10:14:25 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:52.788 10:14:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.788 10:14:25 -- nvmf/common.sh@520 -- # config=() 00:17:52.788 10:14:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.788 10:14:25 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.789 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:17:52.789 10:14:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.789 10:14:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.789 { 00:17:52.789 "params": { 00:17:52.789 "name": "Nvme$subsystem", 00:17:52.789 "trtype": "$TEST_TRANSPORT", 00:17:52.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.789 "adrfam": "ipv4", 00:17:52.789 "trsvcid": "$NVMF_PORT", 00:17:52.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.789 "hdgst": ${hdgst:-false}, 00:17:52.789 "ddgst": ${ddgst:-false} 00:17:52.789 }, 00:17:52.789 "method": "bdev_nvme_attach_controller" 00:17:52.789 } 00:17:52.789 EOF 00:17:52.789 )") 00:17:52.789 10:14:25 -- nvmf/common.sh@542 -- # cat 00:17:52.789 10:14:25 -- nvmf/common.sh@544 -- # jq . 00:17:52.789 10:14:25 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.789 10:14:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.789 "params": { 00:17:52.789 "name": "Nvme0", 00:17:52.789 "trtype": "tcp", 00:17:52.789 "traddr": "10.0.0.2", 00:17:52.789 "adrfam": "ipv4", 00:17:52.789 "trsvcid": "4420", 00:17:52.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:52.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:52.789 "hdgst": false, 00:17:52.789 "ddgst": false 00:17:52.789 }, 00:17:52.789 "method": "bdev_nvme_attach_controller" 00:17:52.789 }' 00:17:52.789 [2024-04-17 10:14:26.024884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:52.789 [2024-04-17 10:14:26.024941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421635 ] 00:17:52.789 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.789 [2024-04-17 10:14:26.104637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.046 [2024-04-17 10:14:26.189934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.304 Running I/O for 10 seconds... 
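The JSON fragment printed just above (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode0, hostnqn nqn.2016-06.io.spdk:host0) is what gen_nvmf_target_json 0 splices into a bdev-subsystem config and hands to bdevperf over /dev/fd/63. A standalone equivalent, with the flags copied from the command line above; the config file name and the relative build path are assumptions:

  # assumes the generated JSON config has been saved to /tmp/nvme0_bdevperf.json
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 10

While this runs, the test polls read progress over the same RPC socket with bdev_get_iostat -b Nvme0n1, as seen in the waitforio loop a little further down.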
00:17:53.872 10:14:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.873 10:14:26 -- common/autotest_common.sh@852 -- # return 0 00:17:53.873 10:14:26 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:53.873 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.873 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.873 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.873 10:14:26 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.873 10:14:26 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:53.873 10:14:26 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:53.873 10:14:26 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:53.873 10:14:26 -- target/host_management.sh@52 -- # local ret=1 00:17:53.873 10:14:26 -- target/host_management.sh@53 -- # local i 00:17:53.873 10:14:26 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:53.873 10:14:26 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:53.873 10:14:26 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:53.873 10:14:26 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:53.873 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.873 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.873 10:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.873 10:14:27 -- target/host_management.sh@55 -- # read_io_count=1730 00:17:53.873 10:14:27 -- target/host_management.sh@58 -- # '[' 1730 -ge 100 ']' 00:17:53.873 10:14:27 -- target/host_management.sh@59 -- # ret=0 00:17:53.873 10:14:27 -- target/host_management.sh@60 -- # break 00:17:53.873 10:14:27 -- target/host_management.sh@64 -- # return 0 00:17:53.873 10:14:27 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:53.873 10:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.873 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:17:53.873 [2024-04-17 10:14:27.041032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the 
state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 
10:14:27.041537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.041563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed5d0 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.042855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.873 [2024-04-17 10:14:27.042893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.873 [2024-04-17 10:14:27.042907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.873 [2024-04-17 10:14:27.042923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.873 [2024-04-17 10:14:27.042934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.873 [2024-04-17 10:14:27.042943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.873 [2024-04-17 10:14:27.042954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.873 [2024-04-17 10:14:27.042964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.873 [2024-04-17 10:14:27.042973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957e40 is same with the state(5) to be set 00:17:53.873 [2024-04-17 10:14:27.043023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.873 [2024-04-17 10:14:27.043036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.873 [2024-04-17 10:14:27.043052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.874 [2024-04-17 10:14:27.043886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.874 [2024-04-17 10:14:27.043900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.043912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.043925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.043938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.043952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.043963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.043975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.043985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.043998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.875 [2024-04-17 10:14:27.044423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.875 [2024-04-17 10:14:27.044433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:53.875 [2024-04-17 10:14:27.044512] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19556b0 was disconnected and freed. reset controller. 00:17:53.875 10:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.875 [2024-04-17 10:14:27.045879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:53.875 10:14:27 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:53.875 10:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.875 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:17:53.875 task offset: 109568 on job bdev=Nvme0n1 fails 00:17:53.875 00:17:53.875 Latency(us) 00:17:53.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.875 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:53.875 Job: Nvme0n1 ended in about 0.66 seconds with error 00:17:53.875 Verification LBA range: start 0x0 length 0x400 00:17:53.875 Nvme0n1 : 0.66 2837.27 177.33 97.57 0.00 21437.11 2055.45 29908.25 00:17:53.875 =================================================================================================================== 00:17:53.875 Total : 2837.27 177.33 97.57 0.00 21437.11 2055.45 29908.25 00:17:53.875 [2024-04-17 10:14:27.048166] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:53.875 [2024-04-17 10:14:27.048184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1957e40 (9): Bad file descriptor 00:17:53.875 10:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.875 10:14:27 -- target/host_management.sh@87 -- # sleep 1 00:17:53.875 [2024-04-17 10:14:27.058998] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
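The burst of ABORTED / SQ DELETION completions and the controller reset above are the intended effect of the host-management step: the host is removed from the subsystem while bdevperf still has I/O in flight, then added back so the automatic reset can reconnect. The same pair of RPCs can be issued by hand with scripts/rpc.py; rpc_cmd in the trace forwards the same method names and arguments to the target's default RPC socket:

  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # in-flight I/O on the initiator is aborted and the qpair disconnects, as logged above
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # the initiator's reset can now succeed ("Resetting controller successful.")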
00:17:54.808 10:14:28 -- target/host_management.sh@91 -- # kill -9 3421635 00:17:54.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3421635) - No such process 00:17:54.808 10:14:28 -- target/host_management.sh@91 -- # true 00:17:54.808 10:14:28 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:54.808 10:14:28 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:54.808 10:14:28 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:54.808 10:14:28 -- nvmf/common.sh@520 -- # config=() 00:17:54.808 10:14:28 -- nvmf/common.sh@520 -- # local subsystem config 00:17:54.808 10:14:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:54.808 10:14:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:54.808 { 00:17:54.808 "params": { 00:17:54.808 "name": "Nvme$subsystem", 00:17:54.808 "trtype": "$TEST_TRANSPORT", 00:17:54.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.808 "adrfam": "ipv4", 00:17:54.808 "trsvcid": "$NVMF_PORT", 00:17:54.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.808 "hdgst": ${hdgst:-false}, 00:17:54.808 "ddgst": ${ddgst:-false} 00:17:54.809 }, 00:17:54.809 "method": "bdev_nvme_attach_controller" 00:17:54.809 } 00:17:54.809 EOF 00:17:54.809 )") 00:17:54.809 10:14:28 -- nvmf/common.sh@542 -- # cat 00:17:54.809 10:14:28 -- nvmf/common.sh@544 -- # jq . 00:17:54.809 10:14:28 -- nvmf/common.sh@545 -- # IFS=, 00:17:54.809 10:14:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:54.809 "params": { 00:17:54.809 "name": "Nvme0", 00:17:54.809 "trtype": "tcp", 00:17:54.809 "traddr": "10.0.0.2", 00:17:54.809 "adrfam": "ipv4", 00:17:54.809 "trsvcid": "4420", 00:17:54.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:54.809 "hdgst": false, 00:17:54.809 "ddgst": false 00:17:54.809 }, 00:17:54.809 "method": "bdev_nvme_attach_controller" 00:17:54.809 }' 00:17:54.809 [2024-04-17 10:14:28.112028] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:54.809 [2024-04-17 10:14:28.112087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421928 ] 00:17:55.067 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.067 [2024-04-17 10:14:28.193905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.067 [2024-04-17 10:14:28.277799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.325 Running I/O for 1 seconds... 
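The heredoc above emits one bdev_nvme_attach_controller block per subsystem and hands the result to bdevperf as --json /dev/fd/62, which is what bash process substitution expands to, so the config never touches disk. Written out by hand, an equivalent standalone invocation would look roughly like the sketch below; the wrapper structure follows SPDK's JSON config schema and the file name nvme0.json is only an illustration:

cat > nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json nvme0.json -q 64 -o 65536 -w verify -t 1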
00:17:56.259 00:17:56.259 Latency(us) 00:17:56.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.259 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:56.259 Verification LBA range: start 0x0 length 0x400 00:17:56.259 Nvme0n1 : 1.01 2870.82 179.43 0.00 0.00 21934.46 1251.14 27763.43 00:17:56.259 =================================================================================================================== 00:17:56.259 Total : 2870.82 179.43 0.00 0.00 21934.46 1251.14 27763.43 00:17:56.517 10:14:29 -- target/host_management.sh@101 -- # stoptarget 00:17:56.517 10:14:29 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:56.517 10:14:29 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:56.517 10:14:29 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:56.517 10:14:29 -- target/host_management.sh@40 -- # nvmftestfini 00:17:56.517 10:14:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.517 10:14:29 -- nvmf/common.sh@116 -- # sync 00:17:56.517 10:14:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.517 10:14:29 -- nvmf/common.sh@119 -- # set +e 00:17:56.517 10:14:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.517 10:14:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.517 rmmod nvme_tcp 00:17:56.517 rmmod nvme_fabrics 00:17:56.517 rmmod nvme_keyring 00:17:56.517 10:14:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.517 10:14:29 -- nvmf/common.sh@123 -- # set -e 00:17:56.517 10:14:29 -- nvmf/common.sh@124 -- # return 0 00:17:56.517 10:14:29 -- nvmf/common.sh@477 -- # '[' -n 3421335 ']' 00:17:56.517 10:14:29 -- nvmf/common.sh@478 -- # killprocess 3421335 00:17:56.517 10:14:29 -- common/autotest_common.sh@926 -- # '[' -z 3421335 ']' 00:17:56.517 10:14:29 -- common/autotest_common.sh@930 -- # kill -0 3421335 00:17:56.517 10:14:29 -- common/autotest_common.sh@931 -- # uname 00:17:56.517 10:14:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:56.517 10:14:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3421335 00:17:56.518 10:14:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:56.518 10:14:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:56.518 10:14:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3421335' 00:17:56.518 killing process with pid 3421335 00:17:56.518 10:14:29 -- common/autotest_common.sh@945 -- # kill 3421335 00:17:56.518 10:14:29 -- common/autotest_common.sh@950 -- # wait 3421335 00:17:56.775 [2024-04-17 10:14:30.030077] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:56.775 10:14:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.775 10:14:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:56.775 10:14:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:56.775 10:14:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.775 10:14:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:56.775 10:14:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.775 10:14:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.775 10:14:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.306 10:14:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
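The cleanup that closes out each run is the same short sequence every time: remove the bdevperf state file, unload the initiator-side kernel modules, kill the nvmf_tgt process, and tear down the test network namespace. A rough sketch of that order, not a verbatim copy of nvmftestfini; the PID variable and the namespace/interface names below are taken from this run:

rm -f ./local-job0-0-verify.state        # bdevperf per-job state file
modprobe -v -r nvme-tcp nvme-fabrics     # drop the NVMe-oF initiator modules
kill "$nvmfpid" && wait "$nvmfpid"       # stop the target application
ip netns delete cvl_0_0_ns_spdk          # remove the target-side namespace
ip -4 addr flush cvl_0_1                 # clear the initiator-side address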
00:17:59.306 00:17:59.306 real 0m7.289s 00:17:59.306 user 0m22.685s 00:17:59.306 sys 0m1.280s 00:17:59.306 10:14:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.306 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.306 ************************************ 00:17:59.306 END TEST nvmf_host_management 00:17:59.306 ************************************ 00:17:59.306 10:14:32 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:59.306 00:17:59.306 real 0m12.644s 00:17:59.306 user 0m23.947s 00:17:59.306 sys 0m5.084s 00:17:59.306 10:14:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.306 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.306 ************************************ 00:17:59.306 END TEST nvmf_host_management 00:17:59.306 ************************************ 00:17:59.306 10:14:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:59.306 10:14:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:59.306 10:14:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:59.306 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.306 ************************************ 00:17:59.306 START TEST nvmf_lvol 00:17:59.306 ************************************ 00:17:59.306 10:14:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:59.306 * Looking for test storage... 00:17:59.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.306 10:14:32 -- nvmf/common.sh@7 -- # uname -s 00:17:59.306 10:14:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.306 10:14:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.306 10:14:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.306 10:14:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.306 10:14:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.306 10:14:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.306 10:14:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.306 10:14:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.306 10:14:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.306 10:14:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.306 10:14:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:59.306 10:14:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:59.306 10:14:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.306 10:14:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.306 10:14:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.306 10:14:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.306 10:14:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.306 10:14:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.306 10:14:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.306 10:14:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.306 10:14:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.306 10:14:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.306 10:14:32 -- paths/export.sh@5 -- # export PATH 00:17:59.306 10:14:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.306 10:14:32 -- nvmf/common.sh@46 -- # : 0 00:17:59.306 10:14:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:59.306 10:14:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:59.306 10:14:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:59.306 10:14:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.306 10:14:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.306 10:14:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:59.306 10:14:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:59.306 10:14:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:59.306 10:14:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:59.306 10:14:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:59.306 10:14:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:59.306 10:14:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:59.306 10:14:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:59.306 10:14:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:59.306 10:14:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.306 10:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.306 10:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.306 10:14:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:59.306 10:14:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:59.306 10:14:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:59.306 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:18:04.664 10:14:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:04.664 10:14:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:04.664 10:14:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:04.664 10:14:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:04.664 10:14:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:04.664 10:14:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:04.664 10:14:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:04.664 10:14:37 -- nvmf/common.sh@294 -- # net_devs=() 00:18:04.664 10:14:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:04.664 10:14:37 -- nvmf/common.sh@295 -- # e810=() 00:18:04.664 10:14:37 -- nvmf/common.sh@295 -- # local -ga e810 00:18:04.664 10:14:37 -- nvmf/common.sh@296 -- # x722=() 00:18:04.664 10:14:37 -- nvmf/common.sh@296 -- # local -ga x722 00:18:04.664 10:14:37 -- nvmf/common.sh@297 -- # mlx=() 00:18:04.664 10:14:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:04.664 10:14:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.664 10:14:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.665 10:14:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.665 10:14:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.665 10:14:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.665 10:14:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:04.665 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:04.665 10:14:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.665 10:14:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:04.665 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:04.665 10:14:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.665 10:14:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.665 10:14:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.665 10:14:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:04.665 Found net devices under 0000:af:00.0: cvl_0_0 00:18:04.665 10:14:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.665 10:14:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.665 10:14:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.665 10:14:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:04.665 Found net devices under 0000:af:00.1: cvl_0_1 00:18:04.665 10:14:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:04.665 10:14:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:04.665 10:14:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.665 10:14:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.665 10:14:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:04.665 10:14:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.665 10:14:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.665 10:14:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:04.665 10:14:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.665 10:14:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.665 10:14:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:04.665 10:14:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:04.665 10:14:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.665 10:14:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.665 10:14:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:18:04.665 10:14:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.665 10:14:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:04.665 10:14:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.665 10:14:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.665 10:14:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.665 10:14:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:04.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:18:04.665 00:18:04.665 --- 10.0.0.2 ping statistics --- 00:18:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.665 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:04.665 10:14:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:18:04.665 00:18:04.665 --- 10.0.0.1 ping statistics --- 00:18:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.665 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:04.665 10:14:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.665 10:14:37 -- nvmf/common.sh@410 -- # return 0 00:18:04.665 10:14:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.665 10:14:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.665 10:14:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.665 10:14:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.665 10:14:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.665 10:14:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.665 10:14:37 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:04.665 10:14:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.665 10:14:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:04.665 10:14:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.665 10:14:37 -- nvmf/common.sh@469 -- # nvmfpid=3425767 00:18:04.665 10:14:37 -- nvmf/common.sh@470 -- # waitforlisten 3425767 00:18:04.665 10:14:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:04.665 10:14:37 -- common/autotest_common.sh@819 -- # '[' -z 3425767 ']' 00:18:04.665 10:14:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.665 10:14:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.665 10:14:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.665 10:14:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.665 10:14:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.665 [2024-04-17 10:14:37.369036] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:04.665 [2024-04-17 10:14:37.369089] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.665 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.665 [2024-04-17 10:14:37.455996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:04.665 [2024-04-17 10:14:37.544294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.665 [2024-04-17 10:14:37.544443] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.665 [2024-04-17 10:14:37.544455] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.665 [2024-04-17 10:14:37.544465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.665 [2024-04-17 10:14:37.544510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.665 [2024-04-17 10:14:37.544611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.665 [2024-04-17 10:14:37.544612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.922 10:14:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.922 10:14:38 -- common/autotest_common.sh@852 -- # return 0 00:18:04.922 10:14:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.922 10:14:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:04.922 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:18:05.179 10:14:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.179 10:14:38 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:05.179 [2024-04-17 10:14:38.488515] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.436 10:14:38 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.693 10:14:38 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:05.693 10:14:38 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.949 10:14:39 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:05.949 10:14:39 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:06.206 10:14:39 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:06.463 10:14:39 -- target/nvmf_lvol.sh@29 -- # lvs=5a33f040-c290-4d4d-8c62-0b029c3444aa 00:18:06.463 10:14:39 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a33f040-c290-4d4d-8c62-0b029c3444aa lvol 20 00:18:06.720 10:14:39 -- target/nvmf_lvol.sh@32 -- # lvol=e7463381-aef5-48bb-9d48-2e982fdc7aa0 00:18:06.720 10:14:39 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:06.979 10:14:40 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
e7463381-aef5-48bb-9d48-2e982fdc7aa0 00:18:06.979 10:14:40 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:07.237 [2024-04-17 10:14:40.524029] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.237 10:14:40 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:07.495 10:14:40 -- target/nvmf_lvol.sh@42 -- # perf_pid=3426514 00:18:07.495 10:14:40 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:07.495 10:14:40 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:07.752 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.686 10:14:41 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e7463381-aef5-48bb-9d48-2e982fdc7aa0 MY_SNAPSHOT 00:18:08.944 10:14:42 -- target/nvmf_lvol.sh@47 -- # snapshot=c0c457e6-993e-4d55-a7ca-d47d240ebbe1 00:18:08.944 10:14:42 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e7463381-aef5-48bb-9d48-2e982fdc7aa0 30 00:18:09.202 10:14:42 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c0c457e6-993e-4d55-a7ca-d47d240ebbe1 MY_CLONE 00:18:09.461 10:14:42 -- target/nvmf_lvol.sh@49 -- # clone=5d555827-e138-45cf-9143-d4577f207457 00:18:09.461 10:14:42 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5d555827-e138-45cf-9143-d4577f207457 00:18:10.028 10:14:43 -- target/nvmf_lvol.sh@53 -- # wait 3426514 00:18:18.136 Initializing NVMe Controllers 00:18:18.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:18.136 Controller IO queue size 128, less than required. 00:18:18.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:18.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:18.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:18.136 Initialization complete. Launching workers. 
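Everything this lvol test exercises is a chain of rpc.py calls against the already-running target: two 64 MiB malloc bdevs become a RAID-0, a logical volume store is created on the RAID, a 20 MiB lvol from it is exported over NVMe/TCP, and while spdk_nvme_perf writes to it the volume is snapshotted, resized to 30, cloned, and the clone inflated. A condensed sketch of the same sequence; the shell variables stand in for the UUIDs printed in the log:

scripts/rpc.py bdev_malloc_create 64 512                               # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512                               # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)               # prints the lvstore UUID
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)              # prints the lvol UUID
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$(scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)          # taken while perf I/O is running
scripts/rpc.py bdev_lvol_resize "$lvol" 30
clone=$(scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)
scripts/rpc.py bdev_lvol_inflate "$clone"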
00:18:18.136 ======================================================== 00:18:18.136 Latency(us) 00:18:18.136 Device Information : IOPS MiB/s Average min max 00:18:18.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8892.50 34.74 14399.95 1470.05 137323.94 00:18:18.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8830.70 34.49 14497.10 3503.93 63608.98 00:18:18.136 ======================================================== 00:18:18.136 Total : 17723.20 69.23 14448.35 1470.05 137323.94 00:18:18.136 00:18:18.136 10:14:51 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:18.136 10:14:51 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7463381-aef5-48bb-9d48-2e982fdc7aa0 00:18:18.394 10:14:51 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a33f040-c290-4d4d-8c62-0b029c3444aa 00:18:18.651 10:14:51 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:18.651 10:14:51 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:18.651 10:14:51 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:18.651 10:14:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.651 10:14:51 -- nvmf/common.sh@116 -- # sync 00:18:18.651 10:14:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.651 10:14:51 -- nvmf/common.sh@119 -- # set +e 00:18:18.651 10:14:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.651 10:14:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.651 rmmod nvme_tcp 00:18:18.651 rmmod nvme_fabrics 00:18:18.651 rmmod nvme_keyring 00:18:18.909 10:14:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.909 10:14:51 -- nvmf/common.sh@123 -- # set -e 00:18:18.909 10:14:51 -- nvmf/common.sh@124 -- # return 0 00:18:18.909 10:14:51 -- nvmf/common.sh@477 -- # '[' -n 3425767 ']' 00:18:18.909 10:14:51 -- nvmf/common.sh@478 -- # killprocess 3425767 00:18:18.909 10:14:51 -- common/autotest_common.sh@926 -- # '[' -z 3425767 ']' 00:18:18.909 10:14:51 -- common/autotest_common.sh@930 -- # kill -0 3425767 00:18:18.909 10:14:52 -- common/autotest_common.sh@931 -- # uname 00:18:18.909 10:14:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:18.909 10:14:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3425767 00:18:18.909 10:14:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:18.909 10:14:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:18.909 10:14:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3425767' 00:18:18.909 killing process with pid 3425767 00:18:18.909 10:14:52 -- common/autotest_common.sh@945 -- # kill 3425767 00:18:18.909 10:14:52 -- common/autotest_common.sh@950 -- # wait 3425767 00:18:19.168 10:14:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:19.168 10:14:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:19.168 10:14:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:19.168 10:14:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.168 10:14:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:19.168 10:14:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.168 10:14:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.168 10:14:52 -- common/autotest_common.sh@22 -- # 
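The two per-core rows in the latency table above fall straight out of the core mask given to spdk_nvme_perf: -c 0x18 is binary 11000, so bits 3 and 4 are set, one I/O worker runs on lcore 3 and one on lcore 4, and each is reported separately before the combined total. A quick way to expand such a mask while reading these tables (an illustrative one-liner, not part of the test scripts):

for b in {0..7}; do (( (0x18 >> b) & 1 )) && echo "lcore $b"; done    # prints lcore 3 and lcore 4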
_remove_spdk_ns 00:18:21.070 10:14:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:21.070 00:18:21.070 real 0m22.176s 00:18:21.070 user 1m7.534s 00:18:21.070 sys 0m6.411s 00:18:21.070 10:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.070 10:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.070 ************************************ 00:18:21.070 END TEST nvmf_lvol 00:18:21.070 ************************************ 00:18:21.329 10:14:54 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:21.329 10:14:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:21.329 10:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:21.330 10:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.330 ************************************ 00:18:21.330 START TEST nvmf_lvs_grow 00:18:21.330 ************************************ 00:18:21.330 10:14:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:21.330 * Looking for test storage... 00:18:21.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.330 10:14:54 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.330 10:14:54 -- nvmf/common.sh@7 -- # uname -s 00:18:21.330 10:14:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.330 10:14:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.330 10:14:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.330 10:14:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.330 10:14:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.330 10:14:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.330 10:14:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.330 10:14:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.330 10:14:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.330 10:14:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.330 10:14:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:21.330 10:14:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:21.330 10:14:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.330 10:14:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.330 10:14:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.330 10:14:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.330 10:14:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.330 10:14:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.330 10:14:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.330 10:14:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.330 10:14:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.330 10:14:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.330 10:14:54 -- paths/export.sh@5 -- # export PATH 00:18:21.330 10:14:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.330 10:14:54 -- nvmf/common.sh@46 -- # : 0 00:18:21.330 10:14:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:21.330 10:14:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:21.330 10:14:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:21.330 10:14:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.330 10:14:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.330 10:14:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:21.330 10:14:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:21.330 10:14:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:21.330 10:14:54 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.330 10:14:54 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.330 10:14:54 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:21.330 10:14:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:21.330 10:14:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.330 10:14:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:21.330 10:14:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:21.330 10:14:54 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:18:21.330 10:14:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.330 10:14:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.330 10:14:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.330 10:14:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:21.330 10:14:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:21.330 10:14:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:21.330 10:14:54 -- common/autotest_common.sh@10 -- # set +x 00:18:27.889 10:15:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.889 10:15:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.889 10:15:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.889 10:15:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.889 10:15:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.889 10:15:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.889 10:15:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.889 10:15:00 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.889 10:15:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.889 10:15:00 -- nvmf/common.sh@295 -- # e810=() 00:18:27.889 10:15:00 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.889 10:15:00 -- nvmf/common.sh@296 -- # x722=() 00:18:27.889 10:15:00 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.889 10:15:00 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.889 10:15:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.889 10:15:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.889 10:15:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.889 10:15:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:27.889 10:15:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:27.889 10:15:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:27.889 10:15:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:27.889 10:15:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.889 10:15:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.889 10:15:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:27.889 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:27.889 10:15:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.889 10:15:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.890 
10:15:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.890 10:15:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:27.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:27.890 10:15:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.890 10:15:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.890 10:15:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.890 10:15:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.890 10:15:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.890 10:15:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:27.890 Found net devices under 0000:af:00.0: cvl_0_0 00:18:27.890 10:15:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.890 10:15:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.890 10:15:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.890 10:15:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.890 10:15:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.890 10:15:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:27.890 Found net devices under 0000:af:00.1: cvl_0_1 00:18:27.890 10:15:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.890 10:15:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.890 10:15:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.890 10:15:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:27.890 10:15:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.890 10:15:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.890 10:15:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.890 10:15:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:27.890 10:15:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.890 10:15:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.890 10:15:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:27.890 10:15:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.890 10:15:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.890 10:15:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:27.890 10:15:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:27.890 10:15:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.890 10:15:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.890 10:15:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.890 10:15:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.890 10:15:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:27.890 
10:15:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.890 10:15:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.890 10:15:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.890 10:15:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:27.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:27.890 00:18:27.890 --- 10.0.0.2 ping statistics --- 00:18:27.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.890 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:27.890 10:15:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:18:27.890 00:18:27.890 --- 10.0.0.1 ping statistics --- 00:18:27.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.890 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:18:27.890 10:15:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.890 10:15:00 -- nvmf/common.sh@410 -- # return 0 00:18:27.890 10:15:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.890 10:15:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.890 10:15:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:27.890 10:15:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.890 10:15:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:27.890 10:15:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:27.890 10:15:00 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:27.890 10:15:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.890 10:15:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:27.890 10:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:27.890 10:15:00 -- nvmf/common.sh@469 -- # nvmfpid=3432150 00:18:27.890 10:15:00 -- nvmf/common.sh@470 -- # waitforlisten 3432150 00:18:27.890 10:15:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:27.890 10:15:00 -- common/autotest_common.sh@819 -- # '[' -z 3432150 ']' 00:18:27.890 10:15:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.890 10:15:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:27.890 10:15:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.890 10:15:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:27.890 10:15:00 -- common/autotest_common.sh@10 -- # set +x 00:18:27.890 [2024-04-17 10:15:00.369331] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:27.890 [2024-04-17 10:15:00.369385] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.890 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.890 [2024-04-17 10:15:00.446468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.890 [2024-04-17 10:15:00.534068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.890 [2024-04-17 10:15:00.534217] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.890 [2024-04-17 10:15:00.534228] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.890 [2024-04-17 10:15:00.534237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.890 [2024-04-17 10:15:00.534258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.147 10:15:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.147 10:15:01 -- common/autotest_common.sh@852 -- # return 0 00:18:28.147 10:15:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.147 10:15:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.147 10:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:28.147 10:15:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.147 10:15:01 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:28.404 [2024-04-17 10:15:01.560898] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:28.404 10:15:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:28.404 10:15:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:28.404 10:15:01 -- common/autotest_common.sh@10 -- # set +x 00:18:28.404 ************************************ 00:18:28.404 START TEST lvs_grow_clean 00:18:28.404 ************************************ 00:18:28.404 10:15:01 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:28.404 10:15:01 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:28.405 10:15:01 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:28.405 10:15:01 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.405 10:15:01 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.405 10:15:01 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:28.662 10:15:01 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:28.662 10:15:01 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:28.919 10:15:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:28.919 10:15:02 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:28.919 10:15:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:29.177 10:15:02 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:29.177 10:15:02 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:29.177 10:15:02 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 452f123f-f233-47c6-91c0-8faa878ec9b5 lvol 150 00:18:29.435 10:15:02 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e92767b-98a7-4eab-b20b-6c626056ac83 00:18:29.435 10:15:02 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:29.435 10:15:02 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:29.693 [2024-04-17 10:15:02.810053] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:29.693 [2024-04-17 10:15:02.810115] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:29.693 true 00:18:29.693 10:15:02 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:29.693 10:15:02 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:29.950 10:15:03 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:29.950 10:15:03 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:30.207 10:15:03 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e92767b-98a7-4eab-b20b-6c626056ac83 00:18:30.480 10:15:03 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:30.480 [2024-04-17 10:15:03.769091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.480 10:15:03 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:30.767 10:15:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3433060 00:18:30.768 10:15:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.768 10:15:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3433060 /var/tmp/bdevperf.sock 00:18:30.768 10:15:04 -- common/autotest_common.sh@819 -- # '[' -z 3433060 ']' 00:18:30.768 10:15:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.768 10:15:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.768 10:15:04 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.768 10:15:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.768 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:18:30.768 10:15:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:30.768 [2024-04-17 10:15:04.059928] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:30.768 [2024-04-17 10:15:04.059991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433060 ] 00:18:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.050 [2024-04-17 10:15:04.134279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.050 [2024-04-17 10:15:04.219681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.982 10:15:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:31.982 10:15:04 -- common/autotest_common.sh@852 -- # return 0 00:18:31.983 10:15:04 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:32.240 Nvme0n1 00:18:32.240 10:15:05 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:32.503 [ 00:18:32.503 { 00:18:32.503 "name": "Nvme0n1", 00:18:32.503 "aliases": [ 00:18:32.503 "6e92767b-98a7-4eab-b20b-6c626056ac83" 00:18:32.503 ], 00:18:32.503 "product_name": "NVMe disk", 00:18:32.503 "block_size": 4096, 00:18:32.503 "num_blocks": 38912, 00:18:32.503 "uuid": "6e92767b-98a7-4eab-b20b-6c626056ac83", 00:18:32.503 "assigned_rate_limits": { 00:18:32.503 "rw_ios_per_sec": 0, 00:18:32.503 "rw_mbytes_per_sec": 0, 00:18:32.503 "r_mbytes_per_sec": 0, 00:18:32.503 "w_mbytes_per_sec": 0 00:18:32.503 }, 00:18:32.503 "claimed": false, 00:18:32.503 "zoned": false, 00:18:32.503 "supported_io_types": { 00:18:32.503 "read": true, 00:18:32.503 "write": true, 00:18:32.503 "unmap": true, 00:18:32.503 "write_zeroes": true, 00:18:32.503 "flush": true, 00:18:32.503 "reset": true, 00:18:32.503 "compare": true, 00:18:32.503 "compare_and_write": true, 00:18:32.503 "abort": true, 00:18:32.503 "nvme_admin": true, 00:18:32.503 "nvme_io": true 00:18:32.503 }, 00:18:32.503 "driver_specific": { 00:18:32.503 "nvme": [ 00:18:32.503 { 00:18:32.503 "trid": { 00:18:32.503 "trtype": "TCP", 00:18:32.503 "adrfam": "IPv4", 00:18:32.503 "traddr": "10.0.0.2", 00:18:32.503 "trsvcid": "4420", 00:18:32.503 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:32.503 }, 00:18:32.503 "ctrlr_data": { 00:18:32.503 "cntlid": 1, 00:18:32.503 "vendor_id": "0x8086", 00:18:32.503 "model_number": "SPDK bdev Controller", 00:18:32.503 "serial_number": "SPDK0", 00:18:32.503 "firmware_revision": "24.01.1", 00:18:32.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:32.503 "oacs": { 00:18:32.503 "security": 0, 00:18:32.503 "format": 0, 00:18:32.503 "firmware": 0, 00:18:32.503 "ns_manage": 0 00:18:32.503 }, 00:18:32.503 
"multi_ctrlr": true, 00:18:32.503 "ana_reporting": false 00:18:32.503 }, 00:18:32.503 "vs": { 00:18:32.503 "nvme_version": "1.3" 00:18:32.503 }, 00:18:32.503 "ns_data": { 00:18:32.503 "id": 1, 00:18:32.503 "can_share": true 00:18:32.503 } 00:18:32.503 } 00:18:32.503 ], 00:18:32.503 "mp_policy": "active_passive" 00:18:32.503 } 00:18:32.503 } 00:18:32.503 ] 00:18:32.503 10:15:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3433363 00:18:32.503 10:15:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:32.503 10:15:05 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.760 Running I/O for 10 seconds... 00:18:33.691 Latency(us) 00:18:33.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.691 Nvme0n1 : 1.00 15463.00 60.40 0.00 0.00 0.00 0.00 0.00 00:18:33.691 =================================================================================================================== 00:18:33.691 Total : 15463.00 60.40 0.00 0.00 0.00 0.00 0.00 00:18:33.691 00:18:34.624 10:15:07 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:34.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.624 Nvme0n1 : 2.00 15644.00 61.11 0.00 0.00 0.00 0.00 0.00 00:18:34.624 =================================================================================================================== 00:18:34.624 Total : 15644.00 61.11 0.00 0.00 0.00 0.00 0.00 00:18:34.624 00:18:34.881 true 00:18:34.881 10:15:07 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:34.881 10:15:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:34.881 10:15:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:34.881 10:15:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:34.881 10:15:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 3433363 00:18:35.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.813 Nvme0n1 : 3.00 15668.33 61.20 0.00 0.00 0.00 0.00 0.00 00:18:35.813 =================================================================================================================== 00:18:35.813 Total : 15668.33 61.20 0.00 0.00 0.00 0.00 0.00 00:18:35.813 00:18:36.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.745 Nvme0n1 : 4.00 15695.25 61.31 0.00 0.00 0.00 0.00 0.00 00:18:36.745 =================================================================================================================== 00:18:36.745 Total : 15695.25 61.31 0.00 0.00 0.00 0.00 0.00 00:18:36.745 00:18:37.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.677 Nvme0n1 : 5.00 15709.40 61.36 0.00 0.00 0.00 0.00 0.00 00:18:37.677 =================================================================================================================== 00:18:37.677 Total : 15709.40 61.36 0.00 0.00 0.00 0.00 0.00 00:18:37.677 00:18:38.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.608 Nvme0n1 : 6.00 15740.50 61.49 0.00 0.00 0.00 0.00 0.00 00:18:38.608 
=================================================================================================================== 00:18:38.608 Total : 15740.50 61.49 0.00 0.00 0.00 0.00 0.00 00:18:38.608 00:18:39.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.540 Nvme0n1 : 7.00 15748.57 61.52 0.00 0.00 0.00 0.00 0.00 00:18:39.540 =================================================================================================================== 00:18:39.540 Total : 15748.57 61.52 0.00 0.00 0.00 0.00 0.00 00:18:39.540 00:18:40.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.911 Nvme0n1 : 8.00 15769.88 61.60 0.00 0.00 0.00 0.00 0.00 00:18:40.911 =================================================================================================================== 00:18:40.911 Total : 15769.88 61.60 0.00 0.00 0.00 0.00 0.00 00:18:40.911 00:18:41.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.840 Nvme0n1 : 9.00 15784.11 61.66 0.00 0.00 0.00 0.00 0.00 00:18:41.840 =================================================================================================================== 00:18:41.840 Total : 15784.11 61.66 0.00 0.00 0.00 0.00 0.00 00:18:41.840 00:18:42.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.771 Nvme0n1 : 10.00 15792.60 61.69 0.00 0.00 0.00 0.00 0.00 00:18:42.771 =================================================================================================================== 00:18:42.771 Total : 15792.60 61.69 0.00 0.00 0.00 0.00 0.00 00:18:42.771 00:18:42.771 00:18:42.771 Latency(us) 00:18:42.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.771 Nvme0n1 : 10.01 15798.17 61.71 0.00 0.00 8095.53 4527.94 16562.73 00:18:42.771 =================================================================================================================== 00:18:42.771 Total : 15798.17 61.71 0.00 0.00 8095.53 4527.94 16562.73 00:18:42.771 0 00:18:42.771 10:15:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3433060 00:18:42.771 10:15:15 -- common/autotest_common.sh@926 -- # '[' -z 3433060 ']' 00:18:42.771 10:15:15 -- common/autotest_common.sh@930 -- # kill -0 3433060 00:18:42.771 10:15:15 -- common/autotest_common.sh@931 -- # uname 00:18:42.771 10:15:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:42.771 10:15:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3433060 00:18:42.771 10:15:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:42.771 10:15:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:42.771 10:15:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3433060' 00:18:42.771 killing process with pid 3433060 00:18:42.771 10:15:15 -- common/autotest_common.sh@945 -- # kill 3433060 00:18:42.771 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.771 00:18:42.771 Latency(us) 00:18:42.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.771 =================================================================================================================== 00:18:42.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.771 10:15:15 -- common/autotest_common.sh@950 -- # wait 3433060 00:18:43.028 10:15:16 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:43.285 10:15:16 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:43.285 10:15:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:43.542 10:15:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:43.542 10:15:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:43.542 10:15:16 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:43.798 [2024-04-17 10:15:16.878701] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:43.798 10:15:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:43.798 10:15:16 -- common/autotest_common.sh@640 -- # local es=0 00:18:43.798 10:15:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:43.798 10:15:16 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.798 10:15:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.798 10:15:16 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.798 10:15:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.799 10:15:16 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.799 10:15:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.799 10:15:16 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.799 10:15:16 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:43.799 10:15:16 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:44.056 request: 00:18:44.056 { 00:18:44.056 "uuid": "452f123f-f233-47c6-91c0-8faa878ec9b5", 00:18:44.056 "method": "bdev_lvol_get_lvstores", 00:18:44.056 "req_id": 1 00:18:44.056 } 00:18:44.056 Got JSON-RPC error response 00:18:44.056 response: 00:18:44.056 { 00:18:44.056 "code": -19, 00:18:44.056 "message": "No such device" 00:18:44.056 } 00:18:44.056 10:15:17 -- common/autotest_common.sh@643 -- # es=1 00:18:44.056 10:15:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:44.056 10:15:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:44.056 10:15:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:44.056 10:15:17 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:44.313 aio_bdev 00:18:44.313 10:15:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6e92767b-98a7-4eab-b20b-6c626056ac83 00:18:44.313 10:15:17 -- common/autotest_common.sh@887 -- # local bdev_name=6e92767b-98a7-4eab-b20b-6c626056ac83 00:18:44.313 10:15:17 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:44.313 10:15:17 -- common/autotest_common.sh@889 -- # local i 00:18:44.314 10:15:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:44.314 10:15:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:44.314 10:15:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:44.571 10:15:17 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e92767b-98a7-4eab-b20b-6c626056ac83 -t 2000 00:18:44.571 [ 00:18:44.571 { 00:18:44.571 "name": "6e92767b-98a7-4eab-b20b-6c626056ac83", 00:18:44.571 "aliases": [ 00:18:44.571 "lvs/lvol" 00:18:44.571 ], 00:18:44.571 "product_name": "Logical Volume", 00:18:44.571 "block_size": 4096, 00:18:44.571 "num_blocks": 38912, 00:18:44.571 "uuid": "6e92767b-98a7-4eab-b20b-6c626056ac83", 00:18:44.571 "assigned_rate_limits": { 00:18:44.571 "rw_ios_per_sec": 0, 00:18:44.571 "rw_mbytes_per_sec": 0, 00:18:44.571 "r_mbytes_per_sec": 0, 00:18:44.571 "w_mbytes_per_sec": 0 00:18:44.571 }, 00:18:44.571 "claimed": false, 00:18:44.571 "zoned": false, 00:18:44.571 "supported_io_types": { 00:18:44.571 "read": true, 00:18:44.571 "write": true, 00:18:44.571 "unmap": true, 00:18:44.571 "write_zeroes": true, 00:18:44.571 "flush": false, 00:18:44.571 "reset": true, 00:18:44.571 "compare": false, 00:18:44.571 "compare_and_write": false, 00:18:44.571 "abort": false, 00:18:44.571 "nvme_admin": false, 00:18:44.571 "nvme_io": false 00:18:44.571 }, 00:18:44.571 "driver_specific": { 00:18:44.571 "lvol": { 00:18:44.571 "lvol_store_uuid": "452f123f-f233-47c6-91c0-8faa878ec9b5", 00:18:44.571 "base_bdev": "aio_bdev", 00:18:44.571 "thin_provision": false, 00:18:44.571 "snapshot": false, 00:18:44.571 "clone": false, 00:18:44.571 "esnap_clone": false 00:18:44.571 } 00:18:44.571 } 00:18:44.571 } 00:18:44.571 ] 00:18:44.571 10:15:17 -- common/autotest_common.sh@895 -- # return 0 00:18:44.571 10:15:17 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:44.571 10:15:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:44.828 10:15:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:44.828 10:15:18 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:44.828 10:15:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:45.086 10:15:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:45.086 10:15:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e92767b-98a7-4eab-b20b-6c626056ac83 00:18:45.343 10:15:18 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 452f123f-f233-47c6-91c0-8faa878ec9b5 00:18:45.601 10:15:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:45.858 00:18:45.858 real 0m17.528s 00:18:45.858 user 0m17.411s 00:18:45.858 sys 0m1.529s 00:18:45.858 10:15:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 
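The lvs_grow_clean pass traced above reduces to a short rpc.py flow: build an lvol store on an AIO bdev backed by a plain file, grow the file, rescan the AIO bdev, then grow the store onto the new capacity. A minimal sketch with the same sizes and names (paths shortened to $SPDK, taken from this workspace; the concurrent bdevperf run and the NVMe-oF export are left out here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace layout assumed from this job
  RPC=$SPDK/scripts/rpc.py
  AIO_FILE=$SPDK/test/nvmf/target/aio_bdev

  truncate -s 200M "$AIO_FILE"                         # 200 MiB backing file
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096       # expose it as bdev "aio_bdev" (4K blocks)
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs) # 4 MiB clusters -> 49 data clusters
  $RPC bdev_lvol_create -u "$lvs" lvol 150             # 150 MiB logical volume in the store

  truncate -s 400M "$AIO_FILE"                         # grow the backing file ...
  $RPC bdev_aio_rescan aio_bdev                        # ... let the AIO bdev pick up the new size
  $RPC bdev_lvol_grow_lvstore -u "$lvs"                # ... and grow the store onto it
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99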
00:18:45.858 10:15:19 -- common/autotest_common.sh@10 -- # set +x 00:18:45.858 ************************************ 00:18:45.858 END TEST lvs_grow_clean 00:18:45.858 ************************************ 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:45.858 10:15:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:45.858 10:15:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:45.858 10:15:19 -- common/autotest_common.sh@10 -- # set +x 00:18:45.858 ************************************ 00:18:45.858 START TEST lvs_grow_dirty 00:18:45.858 ************************************ 00:18:45.858 10:15:19 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:45.858 10:15:19 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:46.116 10:15:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:46.116 10:15:19 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:46.373 10:15:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5297077-015d-4fe1-bf4d-52777ed47f7a 00:18:46.373 10:15:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:18:46.373 10:15:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:46.630 10:15:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:46.630 10:15:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:46.630 10:15:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5297077-015d-4fe1-bf4d-52777ed47f7a lvol 150 00:18:46.887 10:15:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7c928959-b030-4863-8965-38dbf9787d5d 00:18:46.887 10:15:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:46.887 10:15:20 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:47.145 [2024-04-17 10:15:20.366279] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:47.145 [2024-04-17 10:15:20.366346] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:47.145 
true 00:18:47.145 10:15:20 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:18:47.145 10:15:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:47.402 10:15:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:47.402 10:15:20 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:47.659 10:15:20 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c928959-b030-4863-8965-38dbf9787d5d 00:18:47.917 10:15:21 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:48.174 10:15:21 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:48.431 10:15:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3436637 00:18:48.431 10:15:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.431 10:15:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3436637 /var/tmp/bdevperf.sock 00:18:48.431 10:15:21 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:48.431 10:15:21 -- common/autotest_common.sh@819 -- # '[' -z 3436637 ']' 00:18:48.431 10:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.431 10:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:48.431 10:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.431 10:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:48.431 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 [2024-04-17 10:15:21.580418] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
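bdevperf is launched here with -z against its own RPC socket, so no I/O starts until the lvol has been exported over NVMe/TCP, a controller is attached, and perform_tests is issued; that RPC kicks off the timed run whose per-second table follows. A sketch of just that wiring, reusing the addresses and queue settings from this run ($SPDK and $lvol as in the sketch above):

  # Target side: export the lvol through an NVMe-oF subsystem listening on 10.0.0.2:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf with -z waits on /var/tmp/bdevperf.sock for RPCs before doing I/O
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests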
00:18:48.432 [2024-04-17 10:15:21.580479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436637 ] 00:18:48.432 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.432 [2024-04-17 10:15:21.653661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.432 [2024-04-17 10:15:21.740870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.362 10:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:49.362 10:15:22 -- common/autotest_common.sh@852 -- # return 0 00:18:49.362 10:15:22 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:49.619 Nvme0n1 00:18:49.619 10:15:22 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:49.877 [ 00:18:49.877 { 00:18:49.877 "name": "Nvme0n1", 00:18:49.877 "aliases": [ 00:18:49.877 "7c928959-b030-4863-8965-38dbf9787d5d" 00:18:49.877 ], 00:18:49.877 "product_name": "NVMe disk", 00:18:49.877 "block_size": 4096, 00:18:49.877 "num_blocks": 38912, 00:18:49.877 "uuid": "7c928959-b030-4863-8965-38dbf9787d5d", 00:18:49.877 "assigned_rate_limits": { 00:18:49.877 "rw_ios_per_sec": 0, 00:18:49.877 "rw_mbytes_per_sec": 0, 00:18:49.877 "r_mbytes_per_sec": 0, 00:18:49.877 "w_mbytes_per_sec": 0 00:18:49.877 }, 00:18:49.877 "claimed": false, 00:18:49.877 "zoned": false, 00:18:49.877 "supported_io_types": { 00:18:49.877 "read": true, 00:18:49.877 "write": true, 00:18:49.877 "unmap": true, 00:18:49.877 "write_zeroes": true, 00:18:49.877 "flush": true, 00:18:49.877 "reset": true, 00:18:49.877 "compare": true, 00:18:49.877 "compare_and_write": true, 00:18:49.877 "abort": true, 00:18:49.877 "nvme_admin": true, 00:18:49.877 "nvme_io": true 00:18:49.877 }, 00:18:49.877 "driver_specific": { 00:18:49.877 "nvme": [ 00:18:49.877 { 00:18:49.877 "trid": { 00:18:49.877 "trtype": "TCP", 00:18:49.877 "adrfam": "IPv4", 00:18:49.877 "traddr": "10.0.0.2", 00:18:49.877 "trsvcid": "4420", 00:18:49.877 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:49.877 }, 00:18:49.877 "ctrlr_data": { 00:18:49.877 "cntlid": 1, 00:18:49.877 "vendor_id": "0x8086", 00:18:49.877 "model_number": "SPDK bdev Controller", 00:18:49.877 "serial_number": "SPDK0", 00:18:49.877 "firmware_revision": "24.01.1", 00:18:49.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:49.877 "oacs": { 00:18:49.877 "security": 0, 00:18:49.877 "format": 0, 00:18:49.877 "firmware": 0, 00:18:49.877 "ns_manage": 0 00:18:49.877 }, 00:18:49.877 "multi_ctrlr": true, 00:18:49.877 "ana_reporting": false 00:18:49.877 }, 00:18:49.877 "vs": { 00:18:49.877 "nvme_version": "1.3" 00:18:49.877 }, 00:18:49.877 "ns_data": { 00:18:49.877 "id": 1, 00:18:49.877 "can_share": true 00:18:49.877 } 00:18:49.877 } 00:18:49.877 ], 00:18:49.877 "mp_policy": "active_passive" 00:18:49.877 } 00:18:49.877 } 00:18:49.877 ] 00:18:49.877 10:15:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3436970 00:18:49.877 10:15:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:49.877 10:15:23 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.877 Running I/O 
for 10 seconds... 00:18:51.246 Latency(us) 00:18:51.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.246 Nvme0n1 : 1.00 15567.00 60.81 0.00 0.00 0.00 0.00 0.00 00:18:51.246 =================================================================================================================== 00:18:51.246 Total : 15567.00 60.81 0.00 0.00 0.00 0.00 0.00 00:18:51.246 00:18:51.810 10:15:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:18:52.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.068 Nvme0n1 : 2.00 15627.00 61.04 0.00 0.00 0.00 0.00 0.00 00:18:52.068 =================================================================================================================== 00:18:52.068 Total : 15627.00 61.04 0.00 0.00 0.00 0.00 0.00 00:18:52.068 00:18:52.068 true 00:18:52.068 10:15:25 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:18:52.068 10:15:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:52.325 10:15:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:52.325 10:15:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:52.325 10:15:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 3436970 00:18:52.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.889 Nvme0n1 : 3.00 15691.00 61.29 0.00 0.00 0.00 0.00 0.00 00:18:52.889 =================================================================================================================== 00:18:52.889 Total : 15691.00 61.29 0.00 0.00 0.00 0.00 0.00 00:18:52.889 00:18:54.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.259 Nvme0n1 : 4.00 15717.50 61.40 0.00 0.00 0.00 0.00 0.00 00:18:54.259 =================================================================================================================== 00:18:54.259 Total : 15717.50 61.40 0.00 0.00 0.00 0.00 0.00 00:18:54.259 00:18:55.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.192 Nvme0n1 : 5.00 15751.00 61.53 0.00 0.00 0.00 0.00 0.00 00:18:55.192 =================================================================================================================== 00:18:55.192 Total : 15751.00 61.53 0.00 0.00 0.00 0.00 0.00 00:18:55.192 00:18:56.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.123 Nvme0n1 : 6.00 15764.00 61.58 0.00 0.00 0.00 0.00 0.00 00:18:56.123 =================================================================================================================== 00:18:56.123 Total : 15764.00 61.58 0.00 0.00 0.00 0.00 0.00 00:18:56.123 00:18:57.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.054 Nvme0n1 : 7.00 15769.29 61.60 0.00 0.00 0.00 0.00 0.00 00:18:57.054 =================================================================================================================== 00:18:57.054 Total : 15769.29 61.60 0.00 0.00 0.00 0.00 0.00 00:18:57.054 00:18:57.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.984 Nvme0n1 : 8.00 15784.50 61.66 0.00 0.00 0.00 0.00 0.00 00:18:57.984 
=================================================================================================================== 00:18:57.984 Total : 15784.50 61.66 0.00 0.00 0.00 0.00 0.00 00:18:57.984 00:18:58.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.914 Nvme0n1 : 9.00 15789.89 61.68 0.00 0.00 0.00 0.00 0.00 00:18:58.914 =================================================================================================================== 00:18:58.914 Total : 15789.89 61.68 0.00 0.00 0.00 0.00 0.00 00:18:58.914 00:19:00.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.284 Nvme0n1 : 10.00 15801.20 61.72 0.00 0.00 0.00 0.00 0.00 00:19:00.284 =================================================================================================================== 00:19:00.284 Total : 15801.20 61.72 0.00 0.00 0.00 0.00 0.00 00:19:00.284 00:19:00.284 00:19:00.284 Latency(us) 00:19:00.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:00.284 Nvme0n1 : 10.01 15802.22 61.73 0.00 0.00 8094.05 2174.60 16681.89 00:19:00.284 =================================================================================================================== 00:19:00.284 Total : 15802.22 61.73 0.00 0.00 8094.05 2174.60 16681.89 00:19:00.284 0 00:19:00.284 10:15:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3436637 00:19:00.284 10:15:33 -- common/autotest_common.sh@926 -- # '[' -z 3436637 ']' 00:19:00.284 10:15:33 -- common/autotest_common.sh@930 -- # kill -0 3436637 00:19:00.285 10:15:33 -- common/autotest_common.sh@931 -- # uname 00:19:00.285 10:15:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.285 10:15:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3436637 00:19:00.285 10:15:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:00.285 10:15:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:00.285 10:15:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3436637' 00:19:00.285 killing process with pid 3436637 00:19:00.285 10:15:33 -- common/autotest_common.sh@945 -- # kill 3436637 00:19:00.285 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.285 00:19:00.285 Latency(us) 00:19:00.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.285 =================================================================================================================== 00:19:00.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.285 10:15:33 -- common/autotest_common.sh@950 -- # wait 3436637 00:19:00.285 10:15:33 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:00.543 10:15:33 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:00.543 10:15:33 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3432150 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@74 -- # wait 3432150 00:19:00.801 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3432150 Killed "${NVMF_APP[@]}" "$@" 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:00.801 10:15:34 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:00.801 10:15:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.801 10:15:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:00.801 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:19:00.801 10:15:34 -- nvmf/common.sh@469 -- # nvmfpid=3438867 00:19:00.801 10:15:34 -- nvmf/common.sh@470 -- # waitforlisten 3438867 00:19:00.801 10:15:34 -- common/autotest_common.sh@819 -- # '[' -z 3438867 ']' 00:19:00.801 10:15:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.801 10:15:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:00.801 10:15:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.801 10:15:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:00.801 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:19:00.801 10:15:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:00.801 [2024-04-17 10:15:34.111224] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:00.801 [2024-04-17 10:15:34.111280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.059 [2024-04-17 10:15:34.195634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.059 [2024-04-17 10:15:34.281874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.059 [2024-04-17 10:15:34.282014] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.059 [2024-04-17 10:15:34.282025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.059 [2024-04-17 10:15:34.282035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
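This is the dirty-shutdown leg: the original target (pid 3432150) was SIGKILLed while the lvol store still had unflushed metadata, and a fresh nvmf_tgt instance is being brought up in its place; once the AIO bdev is re-created, blobstore replay recovers the store, which the cluster-count checks that follow confirm. A minimal sketch of that recovery, assuming the names used in this run and leaving out the network-namespace wrapper:

  kill -9 "$nvmfpid"                                  # crash the target with the lvstore dirty
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # fresh target instance
  nvmfpid=$!

  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096      # re-attach the backing file; the lvstore
                                                      # on it is recovered via blobstore replay
  $RPC bdev_wait_for_examine                          # wait for examine so the lvol reappears
  $RPC bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'   # 61 free / 99 total survive the crash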
00:19:01.059 [2024-04-17 10:15:34.282055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.991 10:15:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:01.991 10:15:35 -- common/autotest_common.sh@852 -- # return 0 00:19:01.991 10:15:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:01.991 10:15:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:01.991 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:19:01.991 10:15:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.991 10:15:35 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:01.991 [2024-04-17 10:15:35.289804] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:01.991 [2024-04-17 10:15:35.289909] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:01.991 [2024-04-17 10:15:35.289946] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:01.991 10:15:35 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:01.991 10:15:35 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7c928959-b030-4863-8965-38dbf9787d5d 00:19:01.991 10:15:35 -- common/autotest_common.sh@887 -- # local bdev_name=7c928959-b030-4863-8965-38dbf9787d5d 00:19:01.991 10:15:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.991 10:15:35 -- common/autotest_common.sh@889 -- # local i 00:19:01.991 10:15:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.991 10:15:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.991 10:15:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:02.265 10:15:35 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c928959-b030-4863-8965-38dbf9787d5d -t 2000 00:19:02.565 [ 00:19:02.565 { 00:19:02.565 "name": "7c928959-b030-4863-8965-38dbf9787d5d", 00:19:02.565 "aliases": [ 00:19:02.565 "lvs/lvol" 00:19:02.565 ], 00:19:02.565 "product_name": "Logical Volume", 00:19:02.565 "block_size": 4096, 00:19:02.565 "num_blocks": 38912, 00:19:02.565 "uuid": "7c928959-b030-4863-8965-38dbf9787d5d", 00:19:02.565 "assigned_rate_limits": { 00:19:02.565 "rw_ios_per_sec": 0, 00:19:02.566 "rw_mbytes_per_sec": 0, 00:19:02.566 "r_mbytes_per_sec": 0, 00:19:02.566 "w_mbytes_per_sec": 0 00:19:02.566 }, 00:19:02.566 "claimed": false, 00:19:02.566 "zoned": false, 00:19:02.566 "supported_io_types": { 00:19:02.566 "read": true, 00:19:02.566 "write": true, 00:19:02.566 "unmap": true, 00:19:02.566 "write_zeroes": true, 00:19:02.566 "flush": false, 00:19:02.566 "reset": true, 00:19:02.566 "compare": false, 00:19:02.566 "compare_and_write": false, 00:19:02.566 "abort": false, 00:19:02.566 "nvme_admin": false, 00:19:02.566 "nvme_io": false 00:19:02.566 }, 00:19:02.566 "driver_specific": { 00:19:02.566 "lvol": { 00:19:02.566 "lvol_store_uuid": "a5297077-015d-4fe1-bf4d-52777ed47f7a", 00:19:02.566 "base_bdev": "aio_bdev", 00:19:02.566 "thin_provision": false, 00:19:02.566 "snapshot": false, 00:19:02.566 "clone": false, 00:19:02.566 "esnap_clone": false 00:19:02.566 } 00:19:02.566 } 00:19:02.566 } 00:19:02.566 ] 00:19:02.566 10:15:35 -- common/autotest_common.sh@895 -- # return 0 00:19:02.566 10:15:35 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:02.566 10:15:35 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:02.823 10:15:36 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:02.823 10:15:36 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:02.823 10:15:36 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:03.080 10:15:36 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:03.080 10:15:36 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:03.338 [2024-04-17 10:15:36.466669] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:03.338 10:15:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:03.338 10:15:36 -- common/autotest_common.sh@640 -- # local es=0 00:19:03.338 10:15:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:03.338 10:15:36 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.338 10:15:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.338 10:15:36 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.338 10:15:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.338 10:15:36 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.338 10:15:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.338 10:15:36 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.338 10:15:36 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:03.338 10:15:36 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:03.595 request: 00:19:03.595 { 00:19:03.595 "uuid": "a5297077-015d-4fe1-bf4d-52777ed47f7a", 00:19:03.595 "method": "bdev_lvol_get_lvstores", 00:19:03.595 "req_id": 1 00:19:03.595 } 00:19:03.595 Got JSON-RPC error response 00:19:03.595 response: 00:19:03.595 { 00:19:03.595 "code": -19, 00:19:03.595 "message": "No such device" 00:19:03.595 } 00:19:03.595 10:15:36 -- common/autotest_common.sh@643 -- # es=1 00:19:03.595 10:15:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:03.595 10:15:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:03.595 10:15:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:03.596 10:15:36 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:03.854 aio_bdev 00:19:03.854 10:15:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7c928959-b030-4863-8965-38dbf9787d5d 00:19:03.854 10:15:36 -- 
common/autotest_common.sh@887 -- # local bdev_name=7c928959-b030-4863-8965-38dbf9787d5d 00:19:03.854 10:15:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:03.854 10:15:36 -- common/autotest_common.sh@889 -- # local i 00:19:03.854 10:15:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:03.854 10:15:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:03.854 10:15:36 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:04.112 10:15:37 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c928959-b030-4863-8965-38dbf9787d5d -t 2000 00:19:04.369 [ 00:19:04.369 { 00:19:04.369 "name": "7c928959-b030-4863-8965-38dbf9787d5d", 00:19:04.369 "aliases": [ 00:19:04.369 "lvs/lvol" 00:19:04.369 ], 00:19:04.369 "product_name": "Logical Volume", 00:19:04.369 "block_size": 4096, 00:19:04.369 "num_blocks": 38912, 00:19:04.369 "uuid": "7c928959-b030-4863-8965-38dbf9787d5d", 00:19:04.369 "assigned_rate_limits": { 00:19:04.369 "rw_ios_per_sec": 0, 00:19:04.369 "rw_mbytes_per_sec": 0, 00:19:04.369 "r_mbytes_per_sec": 0, 00:19:04.369 "w_mbytes_per_sec": 0 00:19:04.369 }, 00:19:04.369 "claimed": false, 00:19:04.369 "zoned": false, 00:19:04.369 "supported_io_types": { 00:19:04.369 "read": true, 00:19:04.369 "write": true, 00:19:04.369 "unmap": true, 00:19:04.369 "write_zeroes": true, 00:19:04.369 "flush": false, 00:19:04.369 "reset": true, 00:19:04.369 "compare": false, 00:19:04.369 "compare_and_write": false, 00:19:04.369 "abort": false, 00:19:04.369 "nvme_admin": false, 00:19:04.369 "nvme_io": false 00:19:04.369 }, 00:19:04.369 "driver_specific": { 00:19:04.369 "lvol": { 00:19:04.369 "lvol_store_uuid": "a5297077-015d-4fe1-bf4d-52777ed47f7a", 00:19:04.369 "base_bdev": "aio_bdev", 00:19:04.369 "thin_provision": false, 00:19:04.369 "snapshot": false, 00:19:04.369 "clone": false, 00:19:04.369 "esnap_clone": false 00:19:04.369 } 00:19:04.369 } 00:19:04.369 } 00:19:04.369 ] 00:19:04.369 10:15:37 -- common/autotest_common.sh@895 -- # return 0 00:19:04.369 10:15:37 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:04.369 10:15:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:04.626 10:15:37 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:04.626 10:15:37 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:04.626 10:15:37 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:04.626 10:15:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:04.626 10:15:37 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c928959-b030-4863-8965-38dbf9787d5d 00:19:04.883 10:15:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5297077-015d-4fe1-bf4d-52777ed47f7a 00:19:05.141 10:15:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:05.398 10:15:38 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:05.398 00:19:05.398 real 0m19.536s 00:19:05.398 user 
0m50.429s 00:19:05.398 sys 0m3.554s 00:19:05.398 10:15:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.398 10:15:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.398 ************************************ 00:19:05.398 END TEST lvs_grow_dirty 00:19:05.398 ************************************ 00:19:05.656 10:15:38 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:05.656 10:15:38 -- common/autotest_common.sh@796 -- # type=--id 00:19:05.656 10:15:38 -- common/autotest_common.sh@797 -- # id=0 00:19:05.656 10:15:38 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:05.656 10:15:38 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:05.656 10:15:38 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:05.656 10:15:38 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:05.656 10:15:38 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:05.656 10:15:38 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:05.656 nvmf_trace.0 00:19:05.656 10:15:38 -- common/autotest_common.sh@811 -- # return 0 00:19:05.656 10:15:38 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:05.656 10:15:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.656 10:15:38 -- nvmf/common.sh@116 -- # sync 00:19:05.656 10:15:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:05.656 10:15:38 -- nvmf/common.sh@119 -- # set +e 00:19:05.656 10:15:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.656 10:15:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:05.656 rmmod nvme_tcp 00:19:05.656 rmmod nvme_fabrics 00:19:05.656 rmmod nvme_keyring 00:19:05.656 10:15:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.656 10:15:38 -- nvmf/common.sh@123 -- # set -e 00:19:05.656 10:15:38 -- nvmf/common.sh@124 -- # return 0 00:19:05.656 10:15:38 -- nvmf/common.sh@477 -- # '[' -n 3438867 ']' 00:19:05.656 10:15:38 -- nvmf/common.sh@478 -- # killprocess 3438867 00:19:05.656 10:15:38 -- common/autotest_common.sh@926 -- # '[' -z 3438867 ']' 00:19:05.656 10:15:38 -- common/autotest_common.sh@930 -- # kill -0 3438867 00:19:05.656 10:15:38 -- common/autotest_common.sh@931 -- # uname 00:19:05.656 10:15:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:05.656 10:15:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3438867 00:19:05.656 10:15:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:05.656 10:15:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:05.656 10:15:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3438867' 00:19:05.656 killing process with pid 3438867 00:19:05.656 10:15:38 -- common/autotest_common.sh@945 -- # kill 3438867 00:19:05.656 10:15:38 -- common/autotest_common.sh@950 -- # wait 3438867 00:19:05.913 10:15:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.913 10:15:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.913 10:15:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.913 10:15:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.913 10:15:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.913 10:15:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.913 10:15:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.913 10:15:39 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:08.455 10:15:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:08.455 00:19:08.455 real 0m46.745s 00:19:08.455 user 1m14.877s 00:19:08.455 sys 0m9.964s 00:19:08.455 10:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.455 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.455 ************************************ 00:19:08.455 END TEST nvmf_lvs_grow 00:19:08.455 ************************************ 00:19:08.455 10:15:41 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:08.455 10:15:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:08.455 10:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:08.455 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:08.455 ************************************ 00:19:08.455 START TEST nvmf_bdev_io_wait 00:19:08.455 ************************************ 00:19:08.455 10:15:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:08.455 * Looking for test storage... 00:19:08.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.455 10:15:41 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.455 10:15:41 -- nvmf/common.sh@7 -- # uname -s 00:19:08.455 10:15:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.455 10:15:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.455 10:15:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.455 10:15:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.455 10:15:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.455 10:15:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.455 10:15:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.455 10:15:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.455 10:15:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.455 10:15:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.455 10:15:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.455 10:15:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:08.455 10:15:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.455 10:15:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.455 10:15:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.455 10:15:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.455 10:15:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.455 10:15:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.455 10:15:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.455 10:15:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.455 10:15:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.455 10:15:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.455 10:15:41 -- paths/export.sh@5 -- # export PATH 00:19:08.455 10:15:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.455 10:15:41 -- nvmf/common.sh@46 -- # : 0 00:19:08.455 10:15:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.455 10:15:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.455 10:15:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.455 10:15:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.455 10:15:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.455 10:15:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:08.455 10:15:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.455 10:15:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.455 10:15:41 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.455 10:15:41 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.455 10:15:41 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:08.455 10:15:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.455 10:15:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.455 10:15:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.455 10:15:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.455 10:15:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.455 10:15:41 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.455 10:15:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.455 10:15:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.455 10:15:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:08.455 10:15:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:08.455 10:15:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:08.455 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:19:13.726 10:15:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.726 10:15:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.726 10:15:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.726 10:15:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.726 10:15:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.726 10:15:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.726 10:15:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.726 10:15:46 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.726 10:15:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.726 10:15:46 -- nvmf/common.sh@295 -- # e810=() 00:19:13.726 10:15:46 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.726 10:15:46 -- nvmf/common.sh@296 -- # x722=() 00:19:13.726 10:15:46 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.726 10:15:46 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.726 10:15:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.726 10:15:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.726 10:15:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.726 10:15:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:13.726 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:13.726 10:15:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
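The sequence above is the harness's NIC discovery: gather_supported_nvmf_pci_devs builds a table of supported device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox ConnectX IDs), matches them against the PCI bus, and resolves each hit to its kernel net device through sysfs, which is where the "Found 0000:af:00.0 (0x8086 - 0x159b)" and "Found net devices under ..." lines come from. A minimal stand-alone approximation of that lookup, using lspci instead of the harness's internal pci_bus_cache (an assumption -- that cache is built elsewhere in common.sh and is not shown in this log):

  #!/usr/bin/env bash
  # Map Intel E810 PCI functions (device IDs 0x1592 / 0x159b) to their net devices,
  # mirroring the /sys/bus/pci/devices/<addr>/net/ lookup seen in the trace above.
  for id in 1592 159b; do
    for pci in $(lspci -Dn -d "8086:${id}" | awk '{print $1}'); do
      echo "Found ${pci} (0x8086 - 0x${id})"
      for netdev in /sys/bus/pci/devices/${pci}/net/*; do
        [ -e "${netdev}" ] && echo "Found net devices under ${pci}: $(basename "${netdev}")"
      done
    done
  done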
00:19:13.726 10:15:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:13.726 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:13.726 10:15:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.726 10:15:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.726 10:15:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.726 10:15:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:13.726 Found net devices under 0000:af:00.0: cvl_0_0 00:19:13.726 10:15:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.726 10:15:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.726 10:15:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.726 10:15:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:13.726 Found net devices under 0000:af:00.1: cvl_0_1 00:19:13.726 10:15:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.726 10:15:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.726 10:15:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.726 10:15:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.726 10:15:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.726 10:15:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.726 10:15:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.726 10:15:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.726 10:15:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.726 10:15:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.726 10:15:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.726 10:15:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.726 10:15:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.726 10:15:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:13.726 10:15:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:13.726 10:15:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:13.726 10:15:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:13.726 10:15:46 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:13.726 10:15:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:13.726 10:15:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:13.726 10:15:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:13.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:19:13.726 00:19:13.726 --- 10.0.0.2 ping statistics --- 00:19:13.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.726 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:13.726 10:15:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:13.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:13.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:19:13.726 00:19:13.726 --- 10.0.0.1 ping statistics --- 00:19:13.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.726 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:13.726 10:15:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.726 10:15:46 -- nvmf/common.sh@410 -- # return 0 00:19:13.726 10:15:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.726 10:15:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.726 10:15:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:13.726 10:15:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.726 10:15:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:13.726 10:15:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:13.726 10:15:47 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:13.726 10:15:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.726 10:15:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:13.726 10:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.726 10:15:47 -- nvmf/common.sh@469 -- # nvmfpid=3443463 00:19:13.726 10:15:47 -- nvmf/common.sh@470 -- # waitforlisten 3443463 00:19:13.726 10:15:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:13.726 10:15:47 -- common/autotest_common.sh@819 -- # '[' -z 3443463 ']' 00:19:13.726 10:15:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.726 10:15:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:13.726 10:15:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.726 10:15:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:13.726 10:15:47 -- common/autotest_common.sh@10 -- # set +x 00:19:13.984 [2024-04-17 10:15:47.070233] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:13.984 [2024-04-17 10:15:47.070293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.984 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.984 [2024-04-17 10:15:47.157027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.984 [2024-04-17 10:15:47.247606] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.984 [2024-04-17 10:15:47.247752] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.984 [2024-04-17 10:15:47.247765] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.984 [2024-04-17 10:15:47.247774] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.984 [2024-04-17 10:15:47.247822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.984 [2024-04-17 10:15:47.247923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.984 [2024-04-17 10:15:47.248051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.984 [2024-04-17 10:15:47.248053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.915 10:15:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:14.915 10:15:48 -- common/autotest_common.sh@852 -- # return 0 00:19:14.915 10:15:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:14.915 10:15:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 10:15:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 [2024-04-17 10:15:48.114309] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 Malloc0 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 10:15:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.915 10:15:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.915 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:19:14.915 [2024-04-17 10:15:48.185583] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.915 10:15:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3443737 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@30 -- # READ_PID=3443740 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # config=() 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.915 10:15:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.915 10:15:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.915 { 00:19:14.915 "params": { 00:19:14.915 "name": "Nvme$subsystem", 00:19:14.915 "trtype": "$TEST_TRANSPORT", 00:19:14.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.915 "adrfam": "ipv4", 00:19:14.915 "trsvcid": "$NVMF_PORT", 00:19:14.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.915 "hdgst": ${hdgst:-false}, 00:19:14.915 "ddgst": ${ddgst:-false} 00:19:14.915 }, 00:19:14.915 "method": "bdev_nvme_attach_controller" 00:19:14.915 } 00:19:14.915 EOF 00:19:14.915 )") 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3443743 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # config=() 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.915 10:15:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:14.915 10:15:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.915 { 00:19:14.915 "params": { 00:19:14.915 "name": "Nvme$subsystem", 00:19:14.915 "trtype": "$TEST_TRANSPORT", 00:19:14.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.915 "adrfam": "ipv4", 00:19:14.915 "trsvcid": "$NVMF_PORT", 00:19:14.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.915 "hdgst": ${hdgst:-false}, 00:19:14.915 "ddgst": ${ddgst:-false} 00:19:14.915 }, 00:19:14.915 "method": "bdev_nvme_attach_controller" 00:19:14.915 } 00:19:14.915 EOF 00:19:14.915 )") 00:19:14.915 10:15:48 -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3443747 00:19:14.915 10:15:48 -- nvmf/common.sh@542 -- # cat 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@35 -- # sync 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # config=() 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.915 10:15:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.915 10:15:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.915 { 00:19:14.915 "params": { 00:19:14.915 "name": "Nvme$subsystem", 00:19:14.915 "trtype": "$TEST_TRANSPORT", 00:19:14.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.915 "adrfam": "ipv4", 00:19:14.915 "trsvcid": "$NVMF_PORT", 00:19:14.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.915 "hdgst": ${hdgst:-false}, 00:19:14.915 "ddgst": ${ddgst:-false} 00:19:14.915 }, 00:19:14.915 "method": "bdev_nvme_attach_controller" 00:19:14.915 } 00:19:14.915 EOF 00:19:14.915 )") 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:14.915 10:15:48 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # config=() 00:19:14.915 10:15:48 -- nvmf/common.sh@542 -- # cat 00:19:14.915 10:15:48 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.915 10:15:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.916 10:15:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.916 { 00:19:14.916 "params": { 00:19:14.916 "name": "Nvme$subsystem", 00:19:14.916 "trtype": "$TEST_TRANSPORT", 00:19:14.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.916 "adrfam": "ipv4", 00:19:14.916 "trsvcid": "$NVMF_PORT", 00:19:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.916 "hdgst": ${hdgst:-false}, 00:19:14.916 "ddgst": ${ddgst:-false} 00:19:14.916 }, 00:19:14.916 "method": "bdev_nvme_attach_controller" 00:19:14.916 } 00:19:14.916 EOF 00:19:14.916 )") 00:19:14.916 10:15:48 -- nvmf/common.sh@542 -- # cat 00:19:14.916 10:15:48 -- target/bdev_io_wait.sh@37 -- # wait 3443737 00:19:14.916 10:15:48 -- nvmf/common.sh@544 -- # jq . 00:19:14.916 10:15:48 -- nvmf/common.sh@542 -- # cat 00:19:14.916 10:15:48 -- nvmf/common.sh@544 -- # jq . 00:19:14.916 10:15:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.916 10:15:48 -- nvmf/common.sh@544 -- # jq . 00:19:14.916 10:15:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.916 "params": { 00:19:14.916 "name": "Nvme1", 00:19:14.916 "trtype": "tcp", 00:19:14.916 "traddr": "10.0.0.2", 00:19:14.916 "adrfam": "ipv4", 00:19:14.916 "trsvcid": "4420", 00:19:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.916 "hdgst": false, 00:19:14.916 "ddgst": false 00:19:14.916 }, 00:19:14.916 "method": "bdev_nvme_attach_controller" 00:19:14.916 }' 00:19:14.916 10:15:48 -- nvmf/common.sh@544 -- # jq . 
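Stripped of the xtrace framing, the bdev_io_wait setup traced in this suite reduces to a short RPC sequence against the namespaced nvmf_tgt plus four bdevperf instances (write, read, flush, unmap) pinned to separate cores, each fed an attach-controller config through /dev/fd/63 (process substitution of gen_nvmf_target_json). A condensed sketch of the same flow; $rpc_py stands in for scripts/rpc.py, $bdevperf for build/examples/bdevperf, and gen_nvmf_target_json is the helper sourced from test/nvmf/common.sh:

  # Target side: bdev pool options, framework init, TCP transport, one 64 MB malloc namespace.
  $rpc_py bdev_set_options -p 5 -c 1
  $rpc_py framework_start_init
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: one short bdevperf run per I/O type, all against the same subsystem.
  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec    # core mask, instance id, workload
    $bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
  done
  wait

The real script records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on each one individually; the single wait above is a simplification.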
00:19:14.916 10:15:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.916 10:15:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.916 "params": { 00:19:14.916 "name": "Nvme1", 00:19:14.916 "trtype": "tcp", 00:19:14.916 "traddr": "10.0.0.2", 00:19:14.916 "adrfam": "ipv4", 00:19:14.916 "trsvcid": "4420", 00:19:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.916 "hdgst": false, 00:19:14.916 "ddgst": false 00:19:14.916 }, 00:19:14.916 "method": "bdev_nvme_attach_controller" 00:19:14.916 }' 00:19:14.916 10:15:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.916 10:15:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.916 "params": { 00:19:14.916 "name": "Nvme1", 00:19:14.916 "trtype": "tcp", 00:19:14.916 "traddr": "10.0.0.2", 00:19:14.916 "adrfam": "ipv4", 00:19:14.916 "trsvcid": "4420", 00:19:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.916 "hdgst": false, 00:19:14.916 "ddgst": false 00:19:14.916 }, 00:19:14.916 "method": "bdev_nvme_attach_controller" 00:19:14.916 }' 00:19:14.916 10:15:48 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.916 10:15:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.916 "params": { 00:19:14.916 "name": "Nvme1", 00:19:14.916 "trtype": "tcp", 00:19:14.916 "traddr": "10.0.0.2", 00:19:14.916 "adrfam": "ipv4", 00:19:14.916 "trsvcid": "4420", 00:19:14.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.916 "hdgst": false, 00:19:14.916 "ddgst": false 00:19:14.916 }, 00:19:14.916 "method": "bdev_nvme_attach_controller" 00:19:14.916 }' 00:19:14.916 [2024-04-17 10:15:48.234479] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:14.916 [2024-04-17 10:15:48.234536] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:14.916 [2024-04-17 10:15:48.235857] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:14.916 [2024-04-17 10:15:48.235918] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:14.916 [2024-04-17 10:15:48.235906] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:14.916 [2024-04-17 10:15:48.235958] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:14.916 [2024-04-17 10:15:48.240942] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:14.916 [2024-04-17 10:15:48.240995] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:15.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.172 [2024-04-17 10:15:48.438673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.429 [2024-04-17 10:15:48.525374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:15.429 [2024-04-17 10:15:48.532110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.429 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.429 [2024-04-17 10:15:48.618087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:15.429 [2024-04-17 10:15:48.629260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.429 [2024-04-17 10:15:48.690598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.429 [2024-04-17 10:15:48.729628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:15.687 [2024-04-17 10:15:48.776501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:15.687 Running I/O for 1 seconds... 00:19:15.687 Running I/O for 1 seconds... 00:19:15.687 Running I/O for 1 seconds... 00:19:15.687 Running I/O for 1 seconds... 00:19:16.618 00:19:16.618 Latency(us) 00:19:16.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.618 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:16.618 Nvme1n1 : 1.02 6730.48 26.29 0.00 0.00 18801.66 8757.99 34793.66 00:19:16.618 =================================================================================================================== 00:19:16.618 Total : 6730.48 26.29 0.00 0.00 18801.66 8757.99 34793.66 00:19:16.875 00:19:16.875 Latency(us) 00:19:16.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.875 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:16.875 Nvme1n1 : 1.00 167322.93 653.61 0.00 0.00 761.62 303.48 923.46 00:19:16.875 =================================================================================================================== 00:19:16.875 Total : 167322.93 653.61 0.00 0.00 761.62 303.48 923.46 00:19:16.875 00:19:16.875 Latency(us) 00:19:16.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.875 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:16.875 Nvme1n1 : 1.01 6365.25 24.86 0.00 0.00 20046.58 5540.77 39321.60 00:19:16.875 =================================================================================================================== 00:19:16.875 Total : 6365.25 24.86 0.00 0.00 20046.58 5540.77 39321.60 00:19:16.875 00:19:16.875 Latency(us) 00:19:16.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.875 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:16.875 Nvme1n1 : 1.01 8600.20 33.59 0.00 0.00 14830.18 6464.23 26571.87 00:19:16.875 =================================================================================================================== 00:19:16.875 Total : 8600.20 33.59 0.00 0.00 14830.18 6464.23 26571.87 00:19:17.132 10:15:50 -- target/bdev_io_wait.sh@38 -- # wait 3443740 00:19:17.132 
10:15:50 -- target/bdev_io_wait.sh@39 -- # wait 3443743 00:19:17.132 10:15:50 -- target/bdev_io_wait.sh@40 -- # wait 3443747 00:19:17.132 10:15:50 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:17.132 10:15:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:17.132 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:19:17.132 10:15:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:17.132 10:15:50 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:17.132 10:15:50 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:17.132 10:15:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:17.132 10:15:50 -- nvmf/common.sh@116 -- # sync 00:19:17.132 10:15:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:17.132 10:15:50 -- nvmf/common.sh@119 -- # set +e 00:19:17.132 10:15:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:17.132 10:15:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:17.132 rmmod nvme_tcp 00:19:17.132 rmmod nvme_fabrics 00:19:17.132 rmmod nvme_keyring 00:19:17.132 10:15:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:17.132 10:15:50 -- nvmf/common.sh@123 -- # set -e 00:19:17.132 10:15:50 -- nvmf/common.sh@124 -- # return 0 00:19:17.132 10:15:50 -- nvmf/common.sh@477 -- # '[' -n 3443463 ']' 00:19:17.132 10:15:50 -- nvmf/common.sh@478 -- # killprocess 3443463 00:19:17.132 10:15:50 -- common/autotest_common.sh@926 -- # '[' -z 3443463 ']' 00:19:17.132 10:15:50 -- common/autotest_common.sh@930 -- # kill -0 3443463 00:19:17.132 10:15:50 -- common/autotest_common.sh@931 -- # uname 00:19:17.132 10:15:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.132 10:15:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3443463 00:19:17.390 10:15:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:17.390 10:15:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:17.390 10:15:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3443463' 00:19:17.390 killing process with pid 3443463 00:19:17.390 10:15:50 -- common/autotest_common.sh@945 -- # kill 3443463 00:19:17.390 10:15:50 -- common/autotest_common.sh@950 -- # wait 3443463 00:19:17.390 10:15:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:17.390 10:15:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:17.390 10:15:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:17.390 10:15:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:17.390 10:15:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:17.390 10:15:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.390 10:15:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.390 10:15:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.922 10:15:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:19.922 00:19:19.922 real 0m11.528s 00:19:19.922 user 0m21.125s 00:19:19.922 sys 0m6.051s 00:19:19.922 10:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.922 10:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.922 ************************************ 00:19:19.922 END TEST nvmf_bdev_io_wait 00:19:19.922 ************************************ 00:19:19.922 10:15:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:19.922 10:15:52 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:19:19.922 10:15:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.922 10:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.922 ************************************ 00:19:19.922 START TEST nvmf_queue_depth 00:19:19.922 ************************************ 00:19:19.922 10:15:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:19.922 * Looking for test storage... 00:19:19.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.922 10:15:52 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.922 10:15:52 -- nvmf/common.sh@7 -- # uname -s 00:19:19.922 10:15:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.922 10:15:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.922 10:15:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.922 10:15:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.922 10:15:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.922 10:15:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.922 10:15:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.922 10:15:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.922 10:15:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.922 10:15:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.922 10:15:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:19.922 10:15:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:19.922 10:15:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.922 10:15:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.922 10:15:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.922 10:15:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.922 10:15:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.922 10:15:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.922 10:15:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.923 10:15:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.923 10:15:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.923 10:15:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.923 10:15:52 -- paths/export.sh@5 -- # export PATH 00:19:19.923 10:15:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.923 10:15:52 -- nvmf/common.sh@46 -- # : 0 00:19:19.923 10:15:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:19.923 10:15:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:19.923 10:15:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:19.923 10:15:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.923 10:15:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.923 10:15:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:19.923 10:15:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:19.923 10:15:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:19.923 10:15:52 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:19.923 10:15:52 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:19.923 10:15:52 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.923 10:15:52 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:19.923 10:15:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:19.923 10:15:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.923 10:15:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:19.923 10:15:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:19.923 10:15:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:19.923 10:15:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.923 10:15:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.923 10:15:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.923 10:15:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:19.923 10:15:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:19.923 10:15:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:19.923 10:15:52 -- common/autotest_common.sh@10 -- # set +x 00:19:25.188 10:15:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.188 10:15:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:25.188 10:15:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:25.188 10:15:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:25.188 10:15:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:25.188 10:15:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:25.188 10:15:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:25.188 10:15:58 -- nvmf/common.sh@294 -- # net_devs=() 
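The very long PATH values traced at paths/export.sh above are expected, not corruption: the exported-paths file prepends the Go, protoc and golangci directories every time a test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, so the same three prefixes pile up over the course of the run. A rough reconstruction of that file, inferred only from the traced values (the actual file contents are not shown in this log):

  # /etc/opt/spdk-pkgdep/paths/export.sh -- reconstructed, not verbatim
  PATH=/opt/golangci/1.54.2/bin:$PATH   # paths/export.sh@2
  PATH=/opt/go/1.21.1/bin:$PATH         # paths/export.sh@3
  PATH=/opt/protoc/21.7/bin:$PATH       # paths/export.sh@4
  export PATH                           # paths/export.sh@5
  echo "$PATH"                          # paths/export.sh@6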
00:19:25.188 10:15:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:25.188 10:15:58 -- nvmf/common.sh@295 -- # e810=() 00:19:25.188 10:15:58 -- nvmf/common.sh@295 -- # local -ga e810 00:19:25.188 10:15:58 -- nvmf/common.sh@296 -- # x722=() 00:19:25.188 10:15:58 -- nvmf/common.sh@296 -- # local -ga x722 00:19:25.188 10:15:58 -- nvmf/common.sh@297 -- # mlx=() 00:19:25.188 10:15:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:25.188 10:15:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.188 10:15:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:25.188 10:15:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:25.188 10:15:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.188 10:15:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:25.188 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:25.188 10:15:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:25.188 10:15:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:25.188 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:25.188 10:15:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.188 10:15:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.188 10:15:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:25.188 10:15:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:25.188 Found net devices under 0000:af:00.0: cvl_0_0 00:19:25.188 10:15:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.188 10:15:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:25.188 10:15:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.188 10:15:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.188 10:15:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:25.188 Found net devices under 0000:af:00.1: cvl_0_1 00:19:25.188 10:15:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.188 10:15:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:25.188 10:15:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:25.188 10:15:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:25.188 10:15:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.188 10:15:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.188 10:15:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.188 10:15:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:25.188 10:15:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.188 10:15:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.188 10:15:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:25.188 10:15:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.188 10:15:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.188 10:15:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:25.188 10:15:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:25.188 10:15:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.188 10:15:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.188 10:15:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.188 10:15:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.188 10:15:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:25.188 10:15:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.445 10:15:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.446 10:15:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.446 10:15:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:25.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:19:25.446 00:19:25.446 --- 10.0.0.2 ping statistics --- 00:19:25.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.446 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:25.446 10:15:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:25.446 00:19:25.446 --- 10.0.0.1 ping statistics --- 00:19:25.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.446 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:25.446 10:15:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.446 10:15:58 -- nvmf/common.sh@410 -- # return 0 00:19:25.446 10:15:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:25.446 10:15:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.446 10:15:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:25.446 10:15:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:25.446 10:15:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.446 10:15:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:25.446 10:15:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:25.446 10:15:58 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:25.446 10:15:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:25.446 10:15:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:25.446 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.446 10:15:58 -- nvmf/common.sh@469 -- # nvmfpid=3447787 00:19:25.446 10:15:58 -- nvmf/common.sh@470 -- # waitforlisten 3447787 00:19:25.446 10:15:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.446 10:15:58 -- common/autotest_common.sh@819 -- # '[' -z 3447787 ']' 00:19:25.446 10:15:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.446 10:15:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.446 10:15:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.446 10:15:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.446 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.446 [2024-04-17 10:15:58.656976] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:25.446 [2024-04-17 10:15:58.657031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.446 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.446 [2024-04-17 10:15:58.737289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.703 [2024-04-17 10:15:58.824227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:25.703 [2024-04-17 10:15:58.824372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.703 [2024-04-17 10:15:58.824383] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.703 [2024-04-17 10:15:58.824392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
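As in the previous suite, nvmftestinit wires the two E810 ports back-to-back before starting the target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) and the nvmf_tgt above is launched inside that namespace, while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); an iptables rule opens TCP/4420 and a single ping in each direction verifies the path. The ip/iptables/ping sequence traced above, collected into one sketch (interface names and addresses exactly as in the trace):

  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1       # target -> initiator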
00:19:25.703 [2024-04-17 10:15:58.824413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.309 10:15:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.309 10:15:59 -- common/autotest_common.sh@852 -- # return 0 00:19:26.309 10:15:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:26.309 10:15:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 10:15:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.309 10:15:59 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:26.309 10:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 [2024-04-17 10:15:59.533631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.309 10:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.309 10:15:59 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:26.309 10:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 Malloc0 00:19:26.309 10:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.309 10:15:59 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.309 10:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 10:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.309 10:15:59 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.309 10:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 10:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.309 10:15:59 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.309 10:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.309 [2024-04-17 10:15:59.602715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.309 10:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:26.309 10:15:59 -- target/queue_depth.sh@30 -- # bdevperf_pid=3447818 00:19:26.309 10:15:59 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:26.309 10:15:59 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.309 10:15:59 -- target/queue_depth.sh@33 -- # waitforlisten 3447818 /var/tmp/bdevperf.sock 00:19:26.309 10:15:59 -- common/autotest_common.sh@819 -- # '[' -z 3447818 ']' 00:19:26.309 10:15:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.309 10:15:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:26.309 10:15:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
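The queue_depth suite drives a single bdevperf rather than four: the target gets the same Malloc0-backed subsystem, bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached through that socket, and bdevperf.py perform_tests then runs the 10-second verify workload at queue depth 1024, which is what the lines that follow show. A condensed sketch; $rpc_py, $bdevperf and $bdevperf_py stand in for scripts/rpc.py, build/examples/bdevperf and examples/bdev/bdevperf/bdevperf.py:

  # Start bdevperf idle on a private RPC socket (the harness waits for the socket to appear).
  $bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!

  # Attach the target subsystem as bdev NVMe0n1 over that socket, then kick off the run.
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $bdevperf_py -s /var/tmp/bdevperf.sock perform_tests

  wait "$bdevperf_pid"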
00:19:26.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.309 10:15:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:26.309 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:19:26.567 [2024-04-17 10:15:59.654045] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:26.567 [2024-04-17 10:15:59.654102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447818 ] 00:19:26.567 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.567 [2024-04-17 10:15:59.736777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.568 [2024-04-17 10:15:59.825638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.499 10:16:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:27.499 10:16:00 -- common/autotest_common.sh@852 -- # return 0 00:19:27.499 10:16:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:27.499 10:16:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.499 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:19:27.499 NVMe0n1 00:19:27.499 10:16:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.499 10:16:00 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.756 Running I/O for 10 seconds... 00:19:37.714 00:19:37.714 Latency(us) 00:19:37.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.714 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:37.714 Verification LBA range: start 0x0 length 0x4000 00:19:37.714 NVMe0n1 : 10.07 12149.17 47.46 0.00 0.00 83927.74 17158.52 61484.68 00:19:37.714 =================================================================================================================== 00:19:37.714 Total : 12149.17 47.46 0.00 0.00 83927.74 17158.52 61484.68 00:19:37.714 0 00:19:37.714 10:16:11 -- target/queue_depth.sh@39 -- # killprocess 3447818 00:19:37.714 10:16:11 -- common/autotest_common.sh@926 -- # '[' -z 3447818 ']' 00:19:37.714 10:16:11 -- common/autotest_common.sh@930 -- # kill -0 3447818 00:19:37.972 10:16:11 -- common/autotest_common.sh@931 -- # uname 00:19:37.972 10:16:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.972 10:16:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3447818 00:19:37.972 10:16:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.972 10:16:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.972 10:16:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3447818' 00:19:37.972 killing process with pid 3447818 00:19:37.972 10:16:11 -- common/autotest_common.sh@945 -- # kill 3447818 00:19:37.972 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.972 00:19:37.972 Latency(us) 00:19:37.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.972 =================================================================================================================== 00:19:37.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.972 10:16:11 -- 
common/autotest_common.sh@950 -- # wait 3447818 00:19:38.230 10:16:11 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:38.230 10:16:11 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:38.230 10:16:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.230 10:16:11 -- nvmf/common.sh@116 -- # sync 00:19:38.230 10:16:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.230 10:16:11 -- nvmf/common.sh@119 -- # set +e 00:19:38.230 10:16:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.230 10:16:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.230 rmmod nvme_tcp 00:19:38.230 rmmod nvme_fabrics 00:19:38.230 rmmod nvme_keyring 00:19:38.230 10:16:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.230 10:16:11 -- nvmf/common.sh@123 -- # set -e 00:19:38.230 10:16:11 -- nvmf/common.sh@124 -- # return 0 00:19:38.230 10:16:11 -- nvmf/common.sh@477 -- # '[' -n 3447787 ']' 00:19:38.230 10:16:11 -- nvmf/common.sh@478 -- # killprocess 3447787 00:19:38.230 10:16:11 -- common/autotest_common.sh@926 -- # '[' -z 3447787 ']' 00:19:38.230 10:16:11 -- common/autotest_common.sh@930 -- # kill -0 3447787 00:19:38.230 10:16:11 -- common/autotest_common.sh@931 -- # uname 00:19:38.230 10:16:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.230 10:16:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3447787 00:19:38.230 10:16:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:38.230 10:16:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:38.230 10:16:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3447787' 00:19:38.230 killing process with pid 3447787 00:19:38.230 10:16:11 -- common/autotest_common.sh@945 -- # kill 3447787 00:19:38.230 10:16:11 -- common/autotest_common.sh@950 -- # wait 3447787 00:19:38.488 10:16:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.488 10:16:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.488 10:16:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.488 10:16:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.488 10:16:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.488 10:16:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.488 10:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.488 10:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.018 10:16:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:41.018 00:19:41.018 real 0m20.956s 00:19:41.018 user 0m25.859s 00:19:41.018 sys 0m5.799s 00:19:41.018 10:16:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.018 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 ************************************ 00:19:41.018 END TEST nvmf_queue_depth 00:19:41.018 ************************************ 00:19:41.018 10:16:13 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:41.018 10:16:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:41.018 10:16:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.018 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 ************************************ 00:19:41.018 START TEST nvmf_multipath 00:19:41.018 ************************************ 00:19:41.018 10:16:13 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:41.018 * Looking for test storage... 00:19:41.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.018 10:16:13 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.018 10:16:13 -- nvmf/common.sh@7 -- # uname -s 00:19:41.018 10:16:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.018 10:16:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.018 10:16:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.018 10:16:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.018 10:16:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.018 10:16:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.018 10:16:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.018 10:16:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.018 10:16:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.018 10:16:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.018 10:16:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:41.018 10:16:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:41.018 10:16:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.018 10:16:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.018 10:16:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.018 10:16:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.018 10:16:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.018 10:16:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.018 10:16:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.018 10:16:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 10:16:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 10:16:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 10:16:13 -- paths/export.sh@5 -- # export PATH 00:19:41.018 10:16:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 10:16:13 -- nvmf/common.sh@46 -- # : 0 00:19:41.018 10:16:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.018 10:16:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.018 10:16:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.018 10:16:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.018 10:16:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.018 10:16:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.018 10:16:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.019 10:16:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.019 10:16:13 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.019 10:16:13 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.019 10:16:13 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:41.019 10:16:13 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.019 10:16:13 -- target/multipath.sh@43 -- # nvmftestinit 00:19:41.019 10:16:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.019 10:16:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.019 10:16:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.019 10:16:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.019 10:16:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.019 10:16:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.019 10:16:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.019 10:16:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.019 10:16:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.019 10:16:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.019 10:16:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.019 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:19:46.279 10:16:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.279 10:16:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:46.279 10:16:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:46.279 10:16:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:46.279 10:16:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:46.279 10:16:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:46.279 10:16:19 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:46.279 10:16:19 -- nvmf/common.sh@294 -- # net_devs=() 00:19:46.279 10:16:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:46.279 10:16:19 -- nvmf/common.sh@295 -- # e810=() 00:19:46.279 10:16:19 -- nvmf/common.sh@295 -- # local -ga e810 00:19:46.279 10:16:19 -- nvmf/common.sh@296 -- # x722=() 00:19:46.279 10:16:19 -- nvmf/common.sh@296 -- # local -ga x722 00:19:46.279 10:16:19 -- nvmf/common.sh@297 -- # mlx=() 00:19:46.279 10:16:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:46.279 10:16:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.279 10:16:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:46.279 10:16:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:46.279 10:16:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:46.279 10:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.279 10:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:46.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:46.279 10:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.279 10:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:46.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:46.279 10:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:46.279 10:16:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:46.279 10:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.279 10:16:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.279 10:16:19 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:19:46.279 10:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.279 10:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:46.279 Found net devices under 0000:af:00.0: cvl_0_0 00:19:46.279 10:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.279 10:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.279 10:16:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.280 10:16:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.280 10:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.280 10:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:46.280 Found net devices under 0000:af:00.1: cvl_0_1 00:19:46.280 10:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.280 10:16:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:46.280 10:16:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:46.280 10:16:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:46.280 10:16:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:46.280 10:16:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:46.280 10:16:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.280 10:16:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.280 10:16:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.280 10:16:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:46.280 10:16:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.280 10:16:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.280 10:16:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:46.280 10:16:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.280 10:16:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.280 10:16:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:46.280 10:16:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:46.280 10:16:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.280 10:16:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.280 10:16:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.280 10:16:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.280 10:16:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:46.280 10:16:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.280 10:16:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.280 10:16:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.280 10:16:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:46.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:19:46.280 00:19:46.280 --- 10.0.0.2 ping statistics --- 00:19:46.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.280 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:46.280 10:16:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:19:46.538 00:19:46.538 --- 10.0.0.1 ping statistics --- 00:19:46.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.538 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:46.538 10:16:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.538 10:16:19 -- nvmf/common.sh@410 -- # return 0 00:19:46.538 10:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.538 10:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.538 10:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:46.538 10:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:46.538 10:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.538 10:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:46.538 10:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:46.538 10:16:19 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:46.538 10:16:19 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:46.538 only one NIC for nvmf test 00:19:46.538 10:16:19 -- target/multipath.sh@47 -- # nvmftestfini 00:19:46.538 10:16:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:46.538 10:16:19 -- nvmf/common.sh@116 -- # sync 00:19:46.538 10:16:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:46.538 10:16:19 -- nvmf/common.sh@119 -- # set +e 00:19:46.538 10:16:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:46.538 10:16:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:46.538 rmmod nvme_tcp 00:19:46.538 rmmod nvme_fabrics 00:19:46.538 rmmod nvme_keyring 00:19:46.538 10:16:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:46.538 10:16:19 -- nvmf/common.sh@123 -- # set -e 00:19:46.538 10:16:19 -- nvmf/common.sh@124 -- # return 0 00:19:46.538 10:16:19 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:46.538 10:16:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:46.538 10:16:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.538 10:16:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.538 10:16:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.538 10:16:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:46.538 10:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.538 10:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.538 10:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.438 10:16:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:48.438 10:16:21 -- target/multipath.sh@48 -- # exit 0 00:19:48.438 10:16:21 -- target/multipath.sh@1 -- # nvmftestfini 00:19:48.438 10:16:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:48.438 10:16:21 -- nvmf/common.sh@116 -- # sync 00:19:48.438 10:16:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:48.438 10:16:21 -- nvmf/common.sh@119 -- # set +e 00:19:48.438 10:16:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:48.438 10:16:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:48.696 10:16:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:48.696 10:16:21 -- nvmf/common.sh@123 -- # set -e 00:19:48.696 10:16:21 -- nvmf/common.sh@124 -- # return 0 00:19:48.696 10:16:21 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:48.696 10:16:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:48.696 10:16:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:48.696 10:16:21 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:19:48.696 10:16:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.696 10:16:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:48.696 10:16:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.696 10:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.696 10:16:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.696 10:16:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:48.696 00:19:48.696 real 0m8.012s 00:19:48.696 user 0m1.617s 00:19:48.696 sys 0m4.374s 00:19:48.696 10:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.696 10:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:48.696 ************************************ 00:19:48.696 END TEST nvmf_multipath 00:19:48.696 ************************************ 00:19:48.696 10:16:21 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:48.696 10:16:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:48.696 10:16:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:48.696 10:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:48.696 ************************************ 00:19:48.696 START TEST nvmf_zcopy 00:19:48.696 ************************************ 00:19:48.696 10:16:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:48.696 * Looking for test storage... 00:19:48.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.696 10:16:21 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.696 10:16:21 -- nvmf/common.sh@7 -- # uname -s 00:19:48.696 10:16:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.696 10:16:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.696 10:16:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.696 10:16:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.696 10:16:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.696 10:16:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.696 10:16:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.696 10:16:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.696 10:16:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.696 10:16:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.696 10:16:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:48.696 10:16:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:48.696 10:16:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.696 10:16:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.696 10:16:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.696 10:16:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.696 10:16:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.696 10:16:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.696 10:16:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.696 10:16:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.696 10:16:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.696 10:16:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.696 10:16:21 -- paths/export.sh@5 -- # export PATH 00:19:48.697 10:16:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.697 10:16:21 -- nvmf/common.sh@46 -- # : 0 00:19:48.697 10:16:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.697 10:16:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.697 10:16:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.697 10:16:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.697 10:16:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.697 10:16:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:48.697 10:16:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.697 10:16:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.697 10:16:21 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:48.697 10:16:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.697 10:16:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.697 10:16:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.697 10:16:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.697 10:16:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.697 10:16:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.697 10:16:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.697 10:16:21 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.697 10:16:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:48.697 10:16:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:48.697 10:16:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:48.697 10:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:55.350 10:16:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:55.350 10:16:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:55.350 10:16:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:55.350 10:16:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:55.350 10:16:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:55.350 10:16:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:55.350 10:16:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:55.350 10:16:27 -- nvmf/common.sh@294 -- # net_devs=() 00:19:55.350 10:16:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:55.350 10:16:27 -- nvmf/common.sh@295 -- # e810=() 00:19:55.350 10:16:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:55.350 10:16:27 -- nvmf/common.sh@296 -- # x722=() 00:19:55.350 10:16:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:55.350 10:16:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:55.350 10:16:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:55.350 10:16:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.350 10:16:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.350 10:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:55.350 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:55.350 10:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:55.350 10:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:55.350 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:55.350 
10:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.350 10:16:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.350 10:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.350 10:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:55.350 Found net devices under 0000:af:00.0: cvl_0_0 00:19:55.350 10:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:55.350 10:16:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.350 10:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.350 10:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:55.350 Found net devices under 0000:af:00.1: cvl_0_1 00:19:55.350 10:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:55.350 10:16:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:55.350 10:16:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.350 10:16:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.350 10:16:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:55.350 10:16:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.350 10:16:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.350 10:16:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:55.350 10:16:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.350 10:16:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.350 10:16:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:55.350 10:16:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:55.350 10:16:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.350 10:16:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.350 10:16:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.350 10:16:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.350 10:16:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:55.350 10:16:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.350 10:16:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.350 10:16:27 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.350 10:16:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:55.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:19:55.350 00:19:55.350 --- 10.0.0.2 ping statistics --- 00:19:55.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.350 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:19:55.350 10:16:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:19:55.350 00:19:55.350 --- 10.0.0.1 ping statistics --- 00:19:55.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.350 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:19:55.350 10:16:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.350 10:16:27 -- nvmf/common.sh@410 -- # return 0 00:19:55.350 10:16:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:55.350 10:16:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.350 10:16:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:55.350 10:16:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.350 10:16:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:55.350 10:16:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:55.350 10:16:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:55.350 10:16:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:55.350 10:16:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:55.350 10:16:27 -- common/autotest_common.sh@10 -- # set +x 00:19:55.350 10:16:27 -- nvmf/common.sh@469 -- # nvmfpid=3457153 00:19:55.350 10:16:27 -- nvmf/common.sh@470 -- # waitforlisten 3457153 00:19:55.350 10:16:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.350 10:16:27 -- common/autotest_common.sh@819 -- # '[' -z 3457153 ']' 00:19:55.350 10:16:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.350 10:16:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:55.350 10:16:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.350 10:16:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:55.350 10:16:27 -- common/autotest_common.sh@10 -- # set +x 00:19:55.350 [2024-04-17 10:16:27.830248] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
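Condensed for reference, the nvmf_tcp_init and nvmfappstart steps traced above amount to the following sequence: one of the two E810 ports (cvl_0_0) is moved into a fresh network namespace and addressed as 10.0.0.2 to act as the NVMe/TCP target, the peer port (cvl_0_1) stays in the host namespace as 10.0.0.1 for the initiator, TCP port 4420 is opened in iptables, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. The sketch below only restates commands already visible in the trace (interface names, addresses and paths are the ones from this run); the backgrounding of nvmf_tgt and the trailing wait for its RPC socket are paraphrased from the waitforlisten step rather than copied verbatim.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, test netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic through
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # the test then waits for the target's RPC socket (/var/tmp/spdk.sock) before issuing rpc.py calls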
00:19:55.350 [2024-04-17 10:16:27.830308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.350 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.350 [2024-04-17 10:16:27.910850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.350 [2024-04-17 10:16:27.995814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:55.350 [2024-04-17 10:16:27.995960] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.350 [2024-04-17 10:16:27.995971] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.350 [2024-04-17 10:16:27.995981] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.350 [2024-04-17 10:16:27.996001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.351 10:16:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:55.351 10:16:28 -- common/autotest_common.sh@852 -- # return 0 00:19:55.351 10:16:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.351 10:16:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:55.351 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 10:16:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.608 10:16:28 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:55.608 10:16:28 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.608 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 [2024-04-17 10:16:28.704310] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.608 10:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.608 10:16:28 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.608 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 10:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.608 10:16:28 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.608 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 [2024-04-17 10:16:28.720468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.608 10:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.608 10:16:28 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.608 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 10:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.608 10:16:28 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.608 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.608 malloc0 00:19:55.608 10:16:28 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:55.608 10:16:28 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.608 10:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.609 10:16:28 -- common/autotest_common.sh@10 -- # set +x 00:19:55.609 10:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.609 10:16:28 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:55.609 10:16:28 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:55.609 10:16:28 -- nvmf/common.sh@520 -- # config=() 00:19:55.609 10:16:28 -- nvmf/common.sh@520 -- # local subsystem config 00:19:55.609 10:16:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:55.609 10:16:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:55.609 { 00:19:55.609 "params": { 00:19:55.609 "name": "Nvme$subsystem", 00:19:55.609 "trtype": "$TEST_TRANSPORT", 00:19:55.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.609 "adrfam": "ipv4", 00:19:55.609 "trsvcid": "$NVMF_PORT", 00:19:55.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.609 "hdgst": ${hdgst:-false}, 00:19:55.609 "ddgst": ${ddgst:-false} 00:19:55.609 }, 00:19:55.609 "method": "bdev_nvme_attach_controller" 00:19:55.609 } 00:19:55.609 EOF 00:19:55.609 )") 00:19:55.609 10:16:28 -- nvmf/common.sh@542 -- # cat 00:19:55.609 10:16:28 -- nvmf/common.sh@544 -- # jq . 00:19:55.609 10:16:28 -- nvmf/common.sh@545 -- # IFS=, 00:19:55.609 10:16:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:55.609 "params": { 00:19:55.609 "name": "Nvme1", 00:19:55.609 "trtype": "tcp", 00:19:55.609 "traddr": "10.0.0.2", 00:19:55.609 "adrfam": "ipv4", 00:19:55.609 "trsvcid": "4420", 00:19:55.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.609 "hdgst": false, 00:19:55.609 "ddgst": false 00:19:55.609 }, 00:19:55.609 "method": "bdev_nvme_attach_controller" 00:19:55.609 }' 00:19:55.609 [2024-04-17 10:16:28.800516] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:55.609 [2024-04-17 10:16:28.800573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457432 ] 00:19:55.609 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.609 [2024-04-17 10:16:28.882823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.866 [2024-04-17 10:16:28.968025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.123 Running I/O for 10 seconds... 
00:20:06.079 00:20:06.079 Latency(us) 00:20:06.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.079 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:06.079 Verification LBA range: start 0x0 length 0x1000 00:20:06.079 Nvme1n1 : 10.01 8585.04 67.07 0.00 0.00 14870.89 1243.69 21209.83 00:20:06.079 =================================================================================================================== 00:20:06.079 Total : 8585.04 67.07 0.00 0.00 14870.89 1243.69 21209.83 00:20:06.337 10:16:39 -- target/zcopy.sh@39 -- # perfpid=3459296 00:20:06.337 10:16:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:06.337 10:16:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.337 10:16:39 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:06.337 10:16:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:06.337 10:16:39 -- nvmf/common.sh@520 -- # config=() 00:20:06.337 10:16:39 -- nvmf/common.sh@520 -- # local subsystem config 00:20:06.337 10:16:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:06.337 10:16:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:06.337 { 00:20:06.337 "params": { 00:20:06.337 "name": "Nvme$subsystem", 00:20:06.337 "trtype": "$TEST_TRANSPORT", 00:20:06.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.337 "adrfam": "ipv4", 00:20:06.337 "trsvcid": "$NVMF_PORT", 00:20:06.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.337 "hdgst": ${hdgst:-false}, 00:20:06.337 "ddgst": ${ddgst:-false} 00:20:06.337 }, 00:20:06.337 "method": "bdev_nvme_attach_controller" 00:20:06.337 } 00:20:06.337 EOF 00:20:06.337 )") 00:20:06.337 [2024-04-17 10:16:39.518988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.519025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 10:16:39 -- nvmf/common.sh@542 -- # cat 00:20:06.337 10:16:39 -- nvmf/common.sh@544 -- # jq . 
00:20:06.337 [2024-04-17 10:16:39.526970] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.526986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 10:16:39 -- nvmf/common.sh@545 -- # IFS=, 00:20:06.337 10:16:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:06.337 "params": { 00:20:06.337 "name": "Nvme1", 00:20:06.337 "trtype": "tcp", 00:20:06.337 "traddr": "10.0.0.2", 00:20:06.337 "adrfam": "ipv4", 00:20:06.337 "trsvcid": "4420", 00:20:06.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.337 "hdgst": false, 00:20:06.337 "ddgst": false 00:20:06.337 }, 00:20:06.337 "method": "bdev_nvme_attach_controller" 00:20:06.337 }' 00:20:06.337 [2024-04-17 10:16:39.534990] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.535004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.543014] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.543028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.551037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.551049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.559060] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.559073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.559783] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
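Pretty-printed, the JSON that gen_nvmf_target_json feeds to this bdevperf instance over /dev/fd/63 is easier to scan. The object below is exactly the attach entry printed by the printf above, with only whitespace added (any wrapper the helper emits around it is not shown in this excerpt and is not reconstructed here): it tells bdevperf to create a bdev named Nvme1 backed by the NVMe/TCP subsystem nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, with header and data digests disabled.

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

The same attachment can also be made against an already-running bdevperf over its RPC socket, which is what the queue_depth test earlier in this log did with rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1.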
00:20:06.337 [2024-04-17 10:16:39.559838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459296 ] 00:20:06.337 [2024-04-17 10:16:39.567082] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.567095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.575103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.575117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.583124] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.583143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.591146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.591159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.337 [2024-04-17 10:16:39.599171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.599185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.607193] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.607208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.615215] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.615228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.337 [2024-04-17 10:16:39.623240] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.337 [2024-04-17 10:16:39.623254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.338 [2024-04-17 10:16:39.631261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.338 [2024-04-17 10:16:39.631274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.338 [2024-04-17 10:16:39.639281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.338 [2024-04-17 10:16:39.639294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.338 [2024-04-17 10:16:39.642168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.338 [2024-04-17 10:16:39.647304] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.338 [2024-04-17 10:16:39.647318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.338 [2024-04-17 10:16:39.655329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.338 [2024-04-17 10:16:39.655342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.338 [2024-04-17 10:16:39.663349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.338 [2024-04-17 10:16:39.663362] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.671388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.671414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.679400] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.679420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.687416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.687433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.695438] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.695460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.703459] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.703473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.711483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.711496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.719505] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.719518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.727525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.727539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.728554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.597 [2024-04-17 10:16:39.735547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.735562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.743576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.743596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.751593] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.751610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.759616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.759632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.767639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.767660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:06.597 [2024-04-17 10:16:39.775667] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:06.597 [2024-04-17 10:16:39.775682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace
00:20:06.597 [2024-04-17 10:16:39.783690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:06.597 [2024-04-17 10:16:39.783703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry error pair -- subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace -- repeats continuously through [2024-04-17 10:16:39.920139] ...]
00:20:06.597 Running I/O for 5 seconds...
[... the same error pair keeps repeating while I/O runs, with log timestamps advancing from 00:20:06.597 to 00:20:10.221 and wall-clock timestamps from [2024-04-17 10:16:39.928152] through [2024-04-17 10:16:43.306048] ...]
00:20:10.221 [2024-04-17 10:16:43.306048] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:10.221 [2024-04-17 10:16:43.306074]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.316748] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.316774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.327249] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.327272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.337965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.337989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.350652] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.350676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.360386] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.360409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.371560] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.371584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.382423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.382445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.395392] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.395414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.413223] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.413247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.423707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.423731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.434250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.434273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.444900] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.444923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.455564] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.455587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.470078] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.470101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.479883] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.221 [2024-04-17 10:16:43.479905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.221 [2024-04-17 10:16:43.490991] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.491013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.222 [2024-04-17 10:16:43.501888] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.501911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.222 [2024-04-17 10:16:43.512313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.512336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.222 [2024-04-17 10:16:43.527718] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.527741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.222 [2024-04-17 10:16:43.537836] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.537858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.222 [2024-04-17 10:16:43.548580] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.222 [2024-04-17 10:16:43.548602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.561898] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.561923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.572370] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.572393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.587550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.587573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.597899] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.597922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.608343] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.608366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.619025] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.619048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.629858] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.629881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.644515] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.644539] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.654501] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.654524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.665335] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.665358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.676389] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.676412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.689310] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.689332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.701780] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.701803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.710381] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.710404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.723869] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.723892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.734347] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.734369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.745199] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.745221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.762044] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.762067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.771778] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.771800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.782575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.782598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.795782] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.795805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.480 [2024-04-17 10:16:43.805941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.480 [2024-04-17 10:16:43.805964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.820580] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.820606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.830313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.830337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.841697] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.841720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.852349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.852373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.863248] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.863271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.876708] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.876731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.887258] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.887281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.897930] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.897952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.909374] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.909397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.920057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.920081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.935171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.935194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.945110] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.945132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.956240] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.956263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.969090] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.969113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.978938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.978963] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:43.993308] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:43.993337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.002803] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.002827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.014661] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.014685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.027704] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.027728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.038534] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.038558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.053067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.053090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.738 [2024-04-17 10:16:44.062186] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.738 [2024-04-17 10:16:44.062209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.073931] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.073961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.086683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.086706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.096549] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.096573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.111513] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.111536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.122107] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.122130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.132713] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.132735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.143654] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.143678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.154489] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.154512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.170766] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.170791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.180751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.180774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.191657] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.191681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.202466] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.202491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.213132] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.213155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.223957] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.223980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.234751] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.234775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.247341] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.247365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.257444] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.996 [2024-04-17 10:16:44.257467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.996 [2024-04-17 10:16:44.268544] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.268568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.997 [2024-04-17 10:16:44.279421] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.279445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.997 [2024-04-17 10:16:44.290197] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.290225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.997 [2024-04-17 10:16:44.303151] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.303174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.997 [2024-04-17 10:16:44.313259] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.313282] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:10.997 [2024-04-17 10:16:44.324010] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:10.997 [2024-04-17 10:16:44.324032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.337035] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.337061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.347202] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.347227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.358713] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.358737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.369896] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.369918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.383057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.383080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.399413] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.399437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.409595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.409618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.420603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.420625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.431691] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.431715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.442779] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.442811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.458583] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.458607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.467604] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.467627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.480929] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.480951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.491274] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.491297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.505937] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.505962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.516346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.516375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.527159] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.527180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.538264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.538287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.549395] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.549418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.562309] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.562332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.255 [2024-04-17 10:16:44.579311] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.255 [2024-04-17 10:16:44.579340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.589695] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.589724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.601110] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.601135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.614069] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.614092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.624718] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.624740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.639397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.639421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.649540] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.649563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.660490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.660513] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.671767] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.671789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.682511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.682532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.707640] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.707669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.717561] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.717584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.728172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.728194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.739108] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.739132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.753845] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.753874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.764376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.764399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.775257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.775278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.788419] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.788442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.798734] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.798757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.813884] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.813907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.823860] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.823882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.514 [2024-04-17 10:16:44.834707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.514 [2024-04-17 10:16:44.834729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.847603] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.847627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.857231] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.857256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.872275] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.872299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.882435] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.882458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.893423] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.893446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.904348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.904371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.915261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.915285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.925952] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.925975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.936841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.936865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.946395] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.946419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 00:20:11.773 Latency(us) 00:20:11.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.773 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:11.773 Nvme1n1 : 5.01 11691.87 91.34 0.00 0.00 10934.99 5064.15 23712.12 00:20:11.773 =================================================================================================================== 00:20:11.773 Total : 11691.87 91.34 0.00 0.00 10934.99 5064.15 23712.12 00:20:11.773 [2024-04-17 10:16:44.952946] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.952964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.960968] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.960987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.973005] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.973023] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.981026] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.981044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.989047] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.989066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:44.997072] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:44.997090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.005093] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.005108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.017125] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.017140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.025146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.025161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.033169] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.033184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.041190] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.041205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.049210] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.049225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.061248] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.061263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.069264] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.069276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.077288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.077300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.085308] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.085320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.093331] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.093346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:11.773 [2024-04-17 10:16:45.105394] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:11.773 [2024-04-17 10:16:45.105426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.113396] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.113414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.121410] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.121424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.129431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.129443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.137455] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.137469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.149490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.149505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.157507] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.157519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 [2024-04-17 10:16:45.165530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:12.032 [2024-04-17 10:16:45.165542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:12.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3459296) - No such process 00:20:12.032 10:16:45 -- target/zcopy.sh@49 -- # wait 3459296 00:20:12.032 10:16:45 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:12.032 10:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:12.032 10:16:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 10:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:12.032 10:16:45 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:12.032 10:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:12.032 10:16:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 delay0 00:20:12.032 10:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:12.032 10:16:45 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:12.032 10:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:12.032 10:16:45 -- common/autotest_common.sh@10 -- # set +x 00:20:12.032 10:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:12.032 10:16:45 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:12.032 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.032 [2024-04-17 10:16:45.345821] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current 
discovery service or discovery service referral 00:20:18.589 [2024-04-17 10:16:51.494103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1869790 is same with the state(5) to be set 00:20:18.589 Initializing NVMe Controllers 00:20:18.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.589 Initialization complete. Launching workers. 00:20:18.589 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 126 00:20:18.589 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 399, failed to submit 47 00:20:18.589 success 228, unsuccess 171, failed 0 00:20:18.589 10:16:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:18.589 10:16:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:18.589 10:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:18.589 10:16:51 -- nvmf/common.sh@116 -- # sync 00:20:18.589 10:16:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:18.589 10:16:51 -- nvmf/common.sh@119 -- # set +e 00:20:18.589 10:16:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:18.589 10:16:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:18.589 rmmod nvme_tcp 00:20:18.589 rmmod nvme_fabrics 00:20:18.589 rmmod nvme_keyring 00:20:18.589 10:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:18.589 10:16:51 -- nvmf/common.sh@123 -- # set -e 00:20:18.589 10:16:51 -- nvmf/common.sh@124 -- # return 0 00:20:18.589 10:16:51 -- nvmf/common.sh@477 -- # '[' -n 3457153 ']' 00:20:18.589 10:16:51 -- nvmf/common.sh@478 -- # killprocess 3457153 00:20:18.589 10:16:51 -- common/autotest_common.sh@926 -- # '[' -z 3457153 ']' 00:20:18.589 10:16:51 -- common/autotest_common.sh@930 -- # kill -0 3457153 00:20:18.589 10:16:51 -- common/autotest_common.sh@931 -- # uname 00:20:18.589 10:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.589 10:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3457153 00:20:18.589 10:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:18.589 10:16:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:18.589 10:16:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3457153' 00:20:18.589 killing process with pid 3457153 00:20:18.589 10:16:51 -- common/autotest_common.sh@945 -- # kill 3457153 00:20:18.589 10:16:51 -- common/autotest_common.sh@950 -- # wait 3457153 00:20:18.589 10:16:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:18.589 10:16:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:18.589 10:16:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:18.589 10:16:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.589 10:16:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:18.589 10:16:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.589 10:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.589 10:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.121 10:16:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:21.121 00:20:21.121 real 0m32.092s 00:20:21.121 user 0m44.025s 00:20:21.121 sys 0m10.068s 00:20:21.121 10:16:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.121 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:20:21.121 
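To make the tail of the zcopy run easier to follow: the rpc_cmd calls traced above swap the contended namespace out for a delay bdev and then drive the abort example against it. Below is a minimal standalone sketch of that same sequence, assuming an SPDK target is already running with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 and a malloc0 bdev, as set up earlier in this job; SPDK_DIR is a placeholder, not a path taken from the log.

  # Sketch only; mirrors the rpc_cmd / abort invocations visible in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Drop the namespace that the add_ns error loop above was colliding with.
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

  # Wrap malloc0 in a delay bdev (delay parameters copied from the trace) and expose it as NSID 1.
  "$SPDK_DIR"/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Run the abort example against the slowed-down namespace, with the flags from the log.
  "$SPDK_DIR"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'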
************************************ 00:20:21.121 END TEST nvmf_zcopy 00:20:21.121 ************************************ 00:20:21.121 10:16:53 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:21.121 10:16:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:21.121 10:16:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:21.121 10:16:53 -- common/autotest_common.sh@10 -- # set +x 00:20:21.121 ************************************ 00:20:21.121 START TEST nvmf_nmic 00:20:21.121 ************************************ 00:20:21.121 10:16:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:21.121 * Looking for test storage... 00:20:21.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:21.121 10:16:54 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.121 10:16:54 -- nvmf/common.sh@7 -- # uname -s 00:20:21.121 10:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.121 10:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.121 10:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.121 10:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.121 10:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.121 10:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.121 10:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.121 10:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.121 10:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.121 10:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.122 10:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:21.122 10:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:21.122 10:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.122 10:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.122 10:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.122 10:16:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.122 10:16:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.122 10:16:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.122 10:16:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.122 10:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.122 10:16:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.122 10:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.122 10:16:54 -- paths/export.sh@5 -- # export PATH 00:20:21.122 10:16:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.122 10:16:54 -- nvmf/common.sh@46 -- # : 0 00:20:21.122 10:16:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:21.122 10:16:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:21.122 10:16:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:21.122 10:16:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.122 10:16:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.122 10:16:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:21.122 10:16:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:21.122 10:16:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:21.122 10:16:54 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.122 10:16:54 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.122 10:16:54 -- target/nmic.sh@14 -- # nvmftestinit 00:20:21.122 10:16:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:21.122 10:16:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.122 10:16:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:21.122 10:16:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:21.122 10:16:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:21.122 10:16:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.122 10:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.122 10:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.122 10:16:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:21.122 10:16:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:21.122 10:16:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:21.122 10:16:54 -- common/autotest_common.sh@10 -- # set +x 00:20:26.383 10:16:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
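The trace that follows (gather_supported_nvmf_pci_devs) builds lists of supported Intel E810/X722 and Mellanox PCI device IDs and then matches the host's NICs against them, printing the functions and kernel net devices it finds. A rough standalone equivalent is sketched here; it is not the common.sh implementation, and the ID list is only the subset visible in this trace.

  # Sketch: report NICs whose vendor:device pair is on a supported list, plus the
  # net devices behind them. IDs are examples from this job (E810 0x1592/0x159b, X722 0x37d2).
  supported="8086:1592 8086:159b 8086:37d2"

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")    # e.g. 0x8086
      device=$(<"$pci/device")    # e.g. 0x159b
      id="${vendor#0x}:${device#0x}"
      case " $supported " in
          *" $id "*)
              echo "Found ${pci##*/} ($vendor - $device)"
              ls "$pci/net" 2>/dev/null   # kernel net devices, e.g. cvl_0_0
              ;;
      esac
  done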
00:20:26.383 10:16:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:26.383 10:16:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:26.383 10:16:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:26.383 10:16:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:26.383 10:16:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:26.383 10:16:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:26.383 10:16:59 -- nvmf/common.sh@294 -- # net_devs=() 00:20:26.383 10:16:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:26.383 10:16:59 -- nvmf/common.sh@295 -- # e810=() 00:20:26.383 10:16:59 -- nvmf/common.sh@295 -- # local -ga e810 00:20:26.383 10:16:59 -- nvmf/common.sh@296 -- # x722=() 00:20:26.383 10:16:59 -- nvmf/common.sh@296 -- # local -ga x722 00:20:26.383 10:16:59 -- nvmf/common.sh@297 -- # mlx=() 00:20:26.383 10:16:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:26.383 10:16:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.383 10:16:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:26.383 10:16:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:26.383 10:16:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:26.383 10:16:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:26.383 10:16:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:26.383 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:26.383 10:16:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:26.383 10:16:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:26.383 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:26.383 10:16:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
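nvmf_tcp_init, traced next, turns the two discovered E810 ports into a small two-host topology on one machine: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule admitting TCP/4420 and a ping in each direction as a sanity check. A hedged, stand-alone sketch of that wiring (interface names, namespace name and addresses are the ones from this log; run as root):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0            # becomes the target-side port, inside the namespace
INI_IF=cvl_0_1            # stays in the root namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                 # move one port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic to the default port through any local firewall
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Both directions must answer before the target application is started
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because the two ports are physically cabled back to back, this gives real on-the-wire TCP between "hosts" without a second machine, which is exactly what the ping statistics in the trace confirm.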
00:20:26.383 10:16:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:26.383 10:16:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.383 10:16:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:26.383 10:16:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.383 10:16:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:26.383 Found net devices under 0000:af:00.0: cvl_0_0 00:20:26.383 10:16:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.383 10:16:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:26.383 10:16:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.383 10:16:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:26.383 10:16:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.383 10:16:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:26.383 Found net devices under 0000:af:00.1: cvl_0_1 00:20:26.383 10:16:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.383 10:16:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:26.383 10:16:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:26.383 10:16:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:26.383 10:16:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:26.383 10:16:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.383 10:16:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.383 10:16:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.383 10:16:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:26.383 10:16:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.383 10:16:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.383 10:16:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:26.383 10:16:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.383 10:16:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.383 10:16:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:26.383 10:16:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:26.383 10:16:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.383 10:16:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.641 10:16:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.641 10:16:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.641 10:16:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:26.641 10:16:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.641 10:16:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.641 10:16:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.641 10:16:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:26.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:26.641 00:20:26.641 --- 10.0.0.2 ping statistics --- 00:20:26.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.641 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:26.641 10:16:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:20:26.641 00:20:26.641 --- 10.0.0.1 ping statistics --- 00:20:26.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.641 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:26.641 10:16:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.641 10:16:59 -- nvmf/common.sh@410 -- # return 0 00:20:26.641 10:16:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:26.641 10:16:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.641 10:16:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:26.641 10:16:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:26.641 10:16:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.641 10:16:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:26.641 10:16:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:26.641 10:16:59 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:26.641 10:16:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:26.641 10:16:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:26.641 10:16:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.641 10:16:59 -- nvmf/common.sh@469 -- # nvmfpid=3465156 00:20:26.641 10:16:59 -- nvmf/common.sh@470 -- # waitforlisten 3465156 00:20:26.641 10:16:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.641 10:16:59 -- common/autotest_common.sh@819 -- # '[' -z 3465156 ']' 00:20:26.641 10:16:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.641 10:16:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.641 10:16:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.641 10:16:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.641 10:16:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.641 [2024-04-17 10:16:59.938222] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:26.641 [2024-04-17 10:16:59.938276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.899 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.899 [2024-04-17 10:17:00.025744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.899 [2024-04-17 10:17:00.124348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:26.899 [2024-04-17 10:17:00.124488] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.899 [2024-04-17 10:17:00.124499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.899 [2024-04-17 10:17:00.124509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.899 [2024-04-17 10:17:00.124551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.899 [2024-04-17 10:17:00.124575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.899 [2024-04-17 10:17:00.124695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.899 [2024-04-17 10:17:00.124695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.832 10:17:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.832 10:17:00 -- common/autotest_common.sh@852 -- # return 0 00:20:27.832 10:17:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:27.832 10:17:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 10:17:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.832 10:17:00 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 [2024-04-17 10:17:00.923408] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 Malloc0 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 [2024-04-17 10:17:00.979517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:27.832 test case1: single bdev can't be used in multiple subsystems 00:20:27.832 10:17:00 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:27.832 10:17:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:00 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 10:17:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:00 -- target/nmic.sh@28 -- # nmic_status=0 00:20:27.832 10:17:00 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:27.832 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 [2024-04-17 10:17:01.007433] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:27.832 [2024-04-17 10:17:01.007456] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:27.832 [2024-04-17 10:17:01.007466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:27.832 request: 00:20:27.832 { 00:20:27.832 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:27.832 "namespace": { 00:20:27.832 "bdev_name": "Malloc0" 00:20:27.832 }, 00:20:27.832 "method": "nvmf_subsystem_add_ns", 00:20:27.832 "req_id": 1 00:20:27.832 } 00:20:27.832 Got JSON-RPC error response 00:20:27.832 response: 00:20:27.832 { 00:20:27.832 "code": -32602, 00:20:27.832 "message": "Invalid parameters" 00:20:27.832 } 00:20:27.832 10:17:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:27.832 10:17:01 -- target/nmic.sh@29 -- # nmic_status=1 00:20:27.832 10:17:01 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:27.832 10:17:01 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:27.832 Adding namespace failed - expected result. 
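Test case 1 above passes because attaching a bdev as a namespace takes an exclusive-write claim: once Malloc0 belongs to cnode1, adding it to cnode2 is rejected with the JSON-RPC error shown ("bdev Malloc0 already claimed"). The same behaviour can be reproduced against a running target with SPDK's rpc.py, using only calls that appear in this trace; this is a hedged sketch, assuming the default /var/tmp/spdk.sock RPC socket:

#!/usr/bin/env bash
# Sketch of the "single bdev can't be used in multiple subsystems" check.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1
if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: namespace add succeeded" >&2
    exit 1
fi
echo " Adding namespace failed - expected result."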
00:20:27.832 10:17:01 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:27.832 test case2: host connect to nvmf target in multiple paths 00:20:27.832 10:17:01 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:27.832 10:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:27.832 10:17:01 -- common/autotest_common.sh@10 -- # set +x 00:20:27.832 [2024-04-17 10:17:01.019591] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:27.832 10:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:27.832 10:17:01 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:29.201 10:17:02 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:30.572 10:17:03 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:30.572 10:17:03 -- common/autotest_common.sh@1177 -- # local i=0 00:20:30.572 10:17:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:30.572 10:17:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:30.572 10:17:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:32.468 10:17:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:32.468 10:17:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:32.468 10:17:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:32.468 10:17:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:32.468 10:17:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:32.468 10:17:05 -- common/autotest_common.sh@1187 -- # return 0 00:20:32.468 10:17:05 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:32.468 [global] 00:20:32.468 thread=1 00:20:32.468 invalidate=1 00:20:32.468 rw=write 00:20:32.468 time_based=1 00:20:32.468 runtime=1 00:20:32.468 ioengine=libaio 00:20:32.468 direct=1 00:20:32.468 bs=4096 00:20:32.468 iodepth=1 00:20:32.468 norandommap=0 00:20:32.468 numjobs=1 00:20:32.468 00:20:32.468 verify_dump=1 00:20:32.468 verify_backlog=512 00:20:32.468 verify_state_save=0 00:20:32.468 do_verify=1 00:20:32.468 verify=crc32c-intel 00:20:32.468 [job0] 00:20:32.468 filename=/dev/nvme0n1 00:20:32.468 Could not set queue depth (nvme0n1) 00:20:33.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:33.032 fio-3.35 00:20:33.032 Starting 1 thread 00:20:33.963 00:20:33.963 job0: (groupid=0, jobs=1): err= 0: pid=3466395: Wed Apr 17 10:17:07 2024 00:20:33.963 read: IOPS=1004, BW=4019KiB/s (4116kB/s)(4140KiB/1030msec) 00:20:33.963 slat (nsec): min=6752, max=26649, avg=7826.51, stdev=1877.05 00:20:33.963 clat (usec): min=210, max=42004, avg=682.14, stdev=4249.93 00:20:33.963 lat (usec): min=217, max=42028, avg=689.97, stdev=4251.39 00:20:33.963 clat percentiles (usec): 00:20:33.963 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 231], 00:20:33.963 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 237], 60.00th=[ 239], 00:20:33.963 | 70.00th=[ 243], 
80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 277], 00:20:33.963 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:33.963 | 99.99th=[42206] 00:20:33.963 write: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec); 0 zone resets 00:20:33.963 slat (usec): min=9, max=26609, avg=28.51, stdev=678.66 00:20:33.963 clat (usec): min=138, max=2907, avg=171.72, stdev=75.58 00:20:33.963 lat (usec): min=149, max=26926, avg=200.23, stdev=686.55 00:20:33.963 clat percentiles (usec): 00:20:33.963 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:20:33.963 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 165], 00:20:33.963 | 70.00th=[ 184], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:20:33.963 | 99.00th=[ 225], 99.50th=[ 314], 99.90th=[ 469], 99.95th=[ 2900], 00:20:33.963 | 99.99th=[ 2900] 00:20:33.963 bw ( KiB/s): min= 2952, max= 9336, per=100.00%, avg=6144.00, stdev=4514.17, samples=2 00:20:33.963 iops : min= 738, max= 2334, avg=1536.00, stdev=1128.54, samples=2 00:20:33.963 lat (usec) : 250=90.78%, 500=8.75% 00:20:33.963 lat (msec) : 4=0.04%, 50=0.43% 00:20:33.963 cpu : usr=0.58%, sys=3.21%, ctx=2576, majf=0, minf=2 00:20:33.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:33.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:33.963 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:33.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:33.963 00:20:33.963 Run status group 0 (all jobs): 00:20:33.963 READ: bw=4019KiB/s (4116kB/s), 4019KiB/s-4019KiB/s (4116kB/s-4116kB/s), io=4140KiB (4239kB), run=1030-1030msec 00:20:33.963 WRITE: bw=5965KiB/s (6108kB/s), 5965KiB/s-5965KiB/s (6108kB/s-6108kB/s), io=6144KiB (6291kB), run=1030-1030msec 00:20:33.963 00:20:33.963 Disk stats (read/write): 00:20:33.963 nvme0n1: ios=1083/1536, merge=0/0, ticks=1312/259, in_queue=1571, util=98.80% 00:20:33.963 10:17:07 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:34.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:34.221 10:17:07 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:34.221 10:17:07 -- common/autotest_common.sh@1198 -- # local i=0 00:20:34.221 10:17:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:34.221 10:17:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:34.221 10:17:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:34.221 10:17:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:34.221 10:17:07 -- common/autotest_common.sh@1210 -- # return 0 00:20:34.221 10:17:07 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:34.221 10:17:07 -- target/nmic.sh@53 -- # nvmftestfini 00:20:34.221 10:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:34.221 10:17:07 -- nvmf/common.sh@116 -- # sync 00:20:34.221 10:17:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:34.221 10:17:07 -- nvmf/common.sh@119 -- # set +e 00:20:34.221 10:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:34.221 10:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:34.221 rmmod nvme_tcp 00:20:34.221 rmmod nvme_fabrics 00:20:34.221 rmmod nvme_keyring 00:20:34.221 10:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:34.221 10:17:07 -- nvmf/common.sh@123 -- # set -e 00:20:34.221 10:17:07 -- 
nvmf/common.sh@124 -- # return 0 00:20:34.221 10:17:07 -- nvmf/common.sh@477 -- # '[' -n 3465156 ']' 00:20:34.221 10:17:07 -- nvmf/common.sh@478 -- # killprocess 3465156 00:20:34.221 10:17:07 -- common/autotest_common.sh@926 -- # '[' -z 3465156 ']' 00:20:34.221 10:17:07 -- common/autotest_common.sh@930 -- # kill -0 3465156 00:20:34.221 10:17:07 -- common/autotest_common.sh@931 -- # uname 00:20:34.221 10:17:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.221 10:17:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3465156 00:20:34.221 10:17:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:34.221 10:17:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:34.221 10:17:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3465156' 00:20:34.221 killing process with pid 3465156 00:20:34.221 10:17:07 -- common/autotest_common.sh@945 -- # kill 3465156 00:20:34.221 10:17:07 -- common/autotest_common.sh@950 -- # wait 3465156 00:20:34.480 10:17:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:34.480 10:17:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:34.480 10:17:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:34.480 10:17:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.480 10:17:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:34.480 10:17:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.480 10:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.480 10:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.014 10:17:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:37.014 00:20:37.014 real 0m15.902s 00:20:37.014 user 0m43.803s 00:20:37.014 sys 0m5.235s 00:20:37.014 10:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.014 10:17:09 -- common/autotest_common.sh@10 -- # set +x 00:20:37.014 ************************************ 00:20:37.014 END TEST nvmf_nmic 00:20:37.014 ************************************ 00:20:37.014 10:17:09 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:37.014 10:17:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:37.014 10:17:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:37.014 10:17:09 -- common/autotest_common.sh@10 -- # set +x 00:20:37.014 ************************************ 00:20:37.014 START TEST nvmf_fio_target 00:20:37.014 ************************************ 00:20:37.014 10:17:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:37.014 * Looking for test storage... 
00:20:37.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.014 10:17:09 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.014 10:17:09 -- nvmf/common.sh@7 -- # uname -s 00:20:37.014 10:17:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.014 10:17:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.014 10:17:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.014 10:17:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.014 10:17:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.014 10:17:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.014 10:17:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.014 10:17:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.014 10:17:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.014 10:17:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.014 10:17:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:37.014 10:17:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:37.014 10:17:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.014 10:17:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.014 10:17:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.014 10:17:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.014 10:17:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.014 10:17:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.014 10:17:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.014 10:17:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.014 10:17:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.014 10:17:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.014 10:17:10 -- paths/export.sh@5 -- # export PATH 00:20:37.014 10:17:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.014 10:17:10 -- nvmf/common.sh@46 -- # : 0 00:20:37.014 10:17:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:37.014 10:17:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:37.014 10:17:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:37.014 10:17:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.014 10:17:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.014 10:17:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:37.014 10:17:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:37.014 10:17:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:37.014 10:17:10 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.014 10:17:10 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.014 10:17:10 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:37.014 10:17:10 -- target/fio.sh@16 -- # nvmftestinit 00:20:37.014 10:17:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:37.014 10:17:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.014 10:17:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:37.014 10:17:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:37.014 10:17:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:37.014 10:17:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.014 10:17:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.014 10:17:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.014 10:17:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:37.014 10:17:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:37.014 10:17:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:37.014 10:17:10 -- common/autotest_common.sh@10 -- # set +x 00:20:42.308 10:17:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:42.308 10:17:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:42.308 10:17:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:42.308 10:17:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:42.308 10:17:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:42.308 10:17:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:42.308 10:17:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:42.308 10:17:15 -- nvmf/common.sh@294 -- # net_devs=() 
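The fio target configuration traced further down exposes four namespaces through one subsystem: two plain malloc bdevs, a two-member RAID0 built from malloc bdevs, and a three-member concat bdev, so a single nvme connect yields /dev/nvme0n1 through /dev/nvme0n4 for the four fio jobs. A condensed sketch of that bdev/subsystem layout using the same rpc.py calls (strip size 64 KiB as passed with -z in the trace; the explicit -b names are illustrative, and the TCP transport is assumed to be created already as in the nmic test above):

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Backing malloc bdevs: 64 MiB each, 512 B blocks
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_create 64 512 -b "$b"
done
# Striped and concatenated composites on top of the malloc bdevs
$RPC bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one NVMe/TCP listener
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns "$NQN" "$ns"
done
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

Mixing plain, striped and concatenated bdevs behind one subsystem is what lets the single fio run below exercise several bdev code paths at once.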
00:20:42.308 10:17:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:42.308 10:17:15 -- nvmf/common.sh@295 -- # e810=() 00:20:42.308 10:17:15 -- nvmf/common.sh@295 -- # local -ga e810 00:20:42.308 10:17:15 -- nvmf/common.sh@296 -- # x722=() 00:20:42.308 10:17:15 -- nvmf/common.sh@296 -- # local -ga x722 00:20:42.308 10:17:15 -- nvmf/common.sh@297 -- # mlx=() 00:20:42.308 10:17:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:42.308 10:17:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.308 10:17:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.309 10:17:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.309 10:17:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.309 10:17:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:42.309 10:17:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:42.309 10:17:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.309 10:17:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:42.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:42.309 10:17:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.309 10:17:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:42.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:42.309 10:17:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.309 10:17:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.309 10:17:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
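Like the nmic test above, this test drives its I/O through fio-wrapper, whose generated libaio job file is echoed in fragments later in the log (one [jobN] section per connected namespace). A commented reconstruction of that job file plus a direct invocation, as a hedged sketch pieced together from those fragments (the wrapper's real option handling may differ):

#!/usr/bin/env bash
# Rebuild the verification job file shown by fio-wrapper and run it directly.
cat > /tmp/nvmf_verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
# direct=1 bypasses the page cache so I/O really reaches the NVMe-oF namespace
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
# write a CRC32C per block, then re-read and verify it
verify=crc32c-intel
do_verify=1
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf_verify.fio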
00:20:42.309 10:17:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:42.309 Found net devices under 0000:af:00.0: cvl_0_0 00:20:42.309 10:17:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.309 10:17:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.309 10:17:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.309 10:17:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.309 10:17:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:42.309 Found net devices under 0000:af:00.1: cvl_0_1 00:20:42.309 10:17:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.309 10:17:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:42.309 10:17:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:42.309 10:17:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:42.309 10:17:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.309 10:17:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.309 10:17:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.309 10:17:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:42.309 10:17:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.309 10:17:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.309 10:17:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:42.309 10:17:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.309 10:17:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.309 10:17:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:42.309 10:17:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:42.309 10:17:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.309 10:17:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.309 10:17:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.309 10:17:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.309 10:17:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:42.309 10:17:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.309 10:17:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.567 10:17:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.567 10:17:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:42.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:20:42.567 00:20:42.567 --- 10.0.0.2 ping statistics --- 00:20:42.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.567 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:42.568 10:17:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:42.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:20:42.568 00:20:42.568 --- 10.0.0.1 ping statistics --- 00:20:42.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.568 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:42.568 10:17:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.568 10:17:15 -- nvmf/common.sh@410 -- # return 0 00:20:42.568 10:17:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:42.568 10:17:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.568 10:17:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:42.568 10:17:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:42.568 10:17:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.568 10:17:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:42.568 10:17:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:42.568 10:17:15 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:42.568 10:17:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:42.568 10:17:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:42.568 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:42.568 10:17:15 -- nvmf/common.sh@469 -- # nvmfpid=3470400 00:20:42.568 10:17:15 -- nvmf/common.sh@470 -- # waitforlisten 3470400 00:20:42.568 10:17:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:42.568 10:17:15 -- common/autotest_common.sh@819 -- # '[' -z 3470400 ']' 00:20:42.568 10:17:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.568 10:17:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:42.568 10:17:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.568 10:17:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:42.568 10:17:15 -- common/autotest_common.sh@10 -- # set +x 00:20:42.568 [2024-04-17 10:17:15.758285] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:42.568 [2024-04-17 10:17:15.758344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.568 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.568 [2024-04-17 10:17:15.843783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.826 [2024-04-17 10:17:15.933924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:42.826 [2024-04-17 10:17:15.934065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.826 [2024-04-17 10:17:15.934076] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.826 [2024-04-17 10:17:15.934085] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.826 [2024-04-17 10:17:15.934123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.826 [2024-04-17 10:17:15.934213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.826 [2024-04-17 10:17:15.934254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.826 [2024-04-17 10:17:15.934254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.390 10:17:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:43.390 10:17:16 -- common/autotest_common.sh@852 -- # return 0 00:20:43.390 10:17:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:43.390 10:17:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:43.390 10:17:16 -- common/autotest_common.sh@10 -- # set +x 00:20:43.648 10:17:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.648 10:17:16 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:43.648 [2024-04-17 10:17:16.890920] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.648 10:17:16 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.906 10:17:17 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:43.906 10:17:17 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.164 10:17:17 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:44.164 10:17:17 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.421 10:17:17 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:44.421 10:17:17 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.679 10:17:17 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:44.679 10:17:17 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:44.937 10:17:18 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.195 10:17:18 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:45.195 10:17:18 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.453 10:17:18 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:45.453 10:17:18 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.710 10:17:19 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:45.711 10:17:19 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:45.969 10:17:19 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:46.226 10:17:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:46.226 10:17:19 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.484 10:17:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:46.484 10:17:19 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:46.743 10:17:19 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.000 [2024-04-17 10:17:20.210522] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.000 10:17:20 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:47.258 10:17:20 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:47.516 10:17:20 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:48.888 10:17:22 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:48.888 10:17:22 -- common/autotest_common.sh@1177 -- # local i=0 00:20:48.888 10:17:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.888 10:17:22 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:48.888 10:17:22 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:48.888 10:17:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:50.786 10:17:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:50.786 10:17:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:50.786 10:17:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:50.786 10:17:24 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:50.786 10:17:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.786 10:17:24 -- common/autotest_common.sh@1187 -- # return 0 00:20:50.786 10:17:24 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:51.068 [global] 00:20:51.068 thread=1 00:20:51.068 invalidate=1 00:20:51.068 rw=write 00:20:51.068 time_based=1 00:20:51.068 runtime=1 00:20:51.068 ioengine=libaio 00:20:51.068 direct=1 00:20:51.068 bs=4096 00:20:51.068 iodepth=1 00:20:51.068 norandommap=0 00:20:51.068 numjobs=1 00:20:51.068 00:20:51.068 verify_dump=1 00:20:51.068 verify_backlog=512 00:20:51.068 verify_state_save=0 00:20:51.068 do_verify=1 00:20:51.068 verify=crc32c-intel 00:20:51.068 [job0] 00:20:51.068 filename=/dev/nvme0n1 00:20:51.068 [job1] 00:20:51.068 filename=/dev/nvme0n2 00:20:51.068 [job2] 00:20:51.068 filename=/dev/nvme0n3 00:20:51.068 [job3] 00:20:51.068 filename=/dev/nvme0n4 00:20:51.068 Could not set queue depth (nvme0n1) 00:20:51.068 Could not set queue depth (nvme0n2) 00:20:51.068 Could not set queue depth (nvme0n3) 00:20:51.068 Could not set queue depth (nvme0n4) 00:20:51.330 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.330 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.330 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.330 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.330 fio-3.35 
00:20:51.330 Starting 4 threads 00:20:52.719 00:20:52.719 job0: (groupid=0, jobs=1): err= 0: pid=3472216: Wed Apr 17 10:17:25 2024 00:20:52.719 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:20:52.719 slat (nsec): min=9573, max=23072, avg=21825.76, stdev=2841.92 00:20:52.719 clat (usec): min=40811, max=42115, avg=41301.86, stdev=502.18 00:20:52.719 lat (usec): min=40833, max=42136, avg=41323.69, stdev=502.67 00:20:52.719 clat percentiles (usec): 00:20:52.719 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:52.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:52.719 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:52.719 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:52.719 | 99.99th=[42206] 00:20:52.719 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:20:52.719 slat (usec): min=9, max=175, avg=10.61, stdev= 7.45 00:20:52.719 clat (usec): min=190, max=1255, avg=257.97, stdev=53.85 00:20:52.719 lat (usec): min=200, max=1265, avg=268.58, stdev=56.11 00:20:52.719 clat percentiles (usec): 00:20:52.719 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:20:52.719 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:20:52.719 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:20:52.719 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 1254], 99.95th=[ 1254], 00:20:52.719 | 99.99th=[ 1254] 00:20:52.719 bw ( KiB/s): min= 4096, max= 4096, per=21.01%, avg=4096.00, stdev= 0.00, samples=1 00:20:52.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:52.719 lat (usec) : 250=41.84%, 500=53.85%, 750=0.19% 00:20:52.719 lat (msec) : 2=0.19%, 50=3.94% 00:20:52.719 cpu : usr=0.10%, sys=0.70%, ctx=534, majf=0, minf=2 00:20:52.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.719 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:52.719 job1: (groupid=0, jobs=1): err= 0: pid=3472217: Wed Apr 17 10:17:25 2024 00:20:52.719 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:52.719 slat (nsec): min=6496, max=25699, avg=7261.27, stdev=1004.78 00:20:52.719 clat (usec): min=264, max=649, avg=350.82, stdev=71.07 00:20:52.719 lat (usec): min=271, max=656, avg=358.08, stdev=71.11 00:20:52.719 clat percentiles (usec): 00:20:52.719 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 297], 00:20:52.719 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:20:52.719 | 70.00th=[ 347], 80.00th=[ 441], 90.00th=[ 469], 95.00th=[ 494], 00:20:52.719 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 644], 99.95th=[ 652], 00:20:52.719 | 99.99th=[ 652] 00:20:52.719 write: IOPS=1833, BW=7333KiB/s (7509kB/s)(7340KiB/1001msec); 0 zone resets 00:20:52.719 slat (usec): min=9, max=25464, avg=24.31, stdev=594.20 00:20:52.719 clat (usec): min=173, max=470, avg=216.89, stdev=20.50 00:20:52.719 lat (usec): min=183, max=25801, avg=241.20, stdev=597.36 00:20:52.719 clat percentiles (usec): 00:20:52.719 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:20:52.719 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:20:52.719 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 
249], 00:20:52.719 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 404], 99.95th=[ 469], 00:20:52.719 | 99.99th=[ 469] 00:20:52.719 bw ( KiB/s): min= 8192, max= 8192, per=42.03%, avg=8192.00, stdev= 0.00, samples=1 00:20:52.719 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:52.719 lat (usec) : 250=51.88%, 500=46.87%, 750=1.25% 00:20:52.719 cpu : usr=1.50%, sys=3.30%, ctx=3373, majf=0, minf=1 00:20:52.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.719 issued rwts: total=1536,1835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:52.719 job2: (groupid=0, jobs=1): err= 0: pid=3472218: Wed Apr 17 10:17:25 2024 00:20:52.719 read: IOPS=585, BW=2342KiB/s (2398kB/s)(2344KiB/1001msec) 00:20:52.719 slat (nsec): min=7288, max=81816, avg=8580.67, stdev=3654.20 00:20:52.720 clat (usec): min=251, max=42096, avg=1250.07, stdev=5909.99 00:20:52.720 lat (usec): min=288, max=42109, avg=1258.65, stdev=5911.00 00:20:52.720 clat percentiles (usec): 00:20:52.720 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 322], 00:20:52.720 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 400], 00:20:52.720 | 70.00th=[ 408], 80.00th=[ 416], 90.00th=[ 429], 95.00th=[ 445], 00:20:52.720 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:52.720 | 99.99th=[42206] 00:20:52.720 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:20:52.720 slat (nsec): min=10621, max=73974, avg=12649.62, stdev=4927.89 00:20:52.720 clat (usec): min=132, max=410, avg=239.25, stdev=32.61 00:20:52.720 lat (usec): min=172, max=426, avg=251.90, stdev=32.52 00:20:52.720 clat percentiles (usec): 00:20:52.720 | 1.00th=[ 169], 5.00th=[ 188], 10.00th=[ 204], 20.00th=[ 217], 00:20:52.720 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:20:52.720 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:20:52.720 | 99.00th=[ 334], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 412], 00:20:52.720 | 99.99th=[ 412] 00:20:52.720 bw ( KiB/s): min= 7624, max= 7624, per=39.11%, avg=7624.00, stdev= 0.00, samples=1 00:20:52.720 iops : min= 1906, max= 1906, avg=1906.00, stdev= 0.00, samples=1 00:20:52.720 lat (usec) : 250=42.86%, 500=56.09%, 750=0.25% 00:20:52.720 lat (msec) : 50=0.81% 00:20:52.720 cpu : usr=1.40%, sys=2.30%, ctx=1612, majf=0, minf=1 00:20:52.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.720 issued rwts: total=586,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:52.720 job3: (groupid=0, jobs=1): err= 0: pid=3472219: Wed Apr 17 10:17:25 2024 00:20:52.720 read: IOPS=1471, BW=5886KiB/s (6027kB/s)(5892KiB/1001msec) 00:20:52.720 slat (nsec): min=6957, max=39978, avg=8138.48, stdev=1703.94 00:20:52.720 clat (usec): min=226, max=41239, avg=430.24, stdev=2143.66 00:20:52.720 lat (usec): min=234, max=41250, avg=438.38, stdev=2144.10 00:20:52.720 clat percentiles (usec): 00:20:52.720 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:20:52.720 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 
285], 60.00th=[ 289], 00:20:52.720 | 70.00th=[ 302], 80.00th=[ 363], 90.00th=[ 412], 95.00th=[ 424], 00:20:52.720 | 99.00th=[ 494], 99.50th=[ 1188], 99.90th=[41157], 99.95th=[41157], 00:20:52.720 | 99.99th=[41157] 00:20:52.720 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:52.720 slat (nsec): min=10143, max=43010, avg=11680.54, stdev=1486.52 00:20:52.720 clat (usec): min=157, max=379, avg=212.90, stdev=34.85 00:20:52.720 lat (usec): min=169, max=390, avg=224.58, stdev=35.00 00:20:52.720 clat percentiles (usec): 00:20:52.720 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:20:52.720 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:20:52.720 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 285], 95.00th=[ 297], 00:20:52.720 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 379], 00:20:52.720 | 99.99th=[ 379] 00:20:52.720 bw ( KiB/s): min= 8192, max= 8192, per=42.03%, avg=8192.00, stdev= 0.00, samples=1 00:20:52.720 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:52.720 lat (usec) : 250=45.40%, 500=54.20%, 750=0.10%, 1000=0.03% 00:20:52.720 lat (msec) : 2=0.07%, 4=0.03%, 20=0.03%, 50=0.13% 00:20:52.720 cpu : usr=1.80%, sys=5.50%, ctx=3009, majf=0, minf=1 00:20:52.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.720 issued rwts: total=1473,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:52.720 00:20:52.720 Run status group 0 (all jobs): 00:20:52.720 READ: bw=14.0MiB/s (14.7MB/s), 83.4KiB/s-6138KiB/s (85.4kB/s-6285kB/s), io=14.1MiB (14.8MB), run=1001-1007msec 00:20:52.720 WRITE: bw=19.0MiB/s (20.0MB/s), 2034KiB/s-7333KiB/s (2083kB/s-7509kB/s), io=19.2MiB (20.1MB), run=1001-1007msec 00:20:52.720 00:20:52.720 Disk stats (read/write): 00:20:52.720 nvme0n1: ios=67/512, merge=0/0, ticks=727/130, in_queue=857, util=87.17% 00:20:52.720 nvme0n2: ios=1291/1536, merge=0/0, ticks=1446/331, in_queue=1777, util=98.58% 00:20:52.720 nvme0n3: ios=544/1024, merge=0/0, ticks=1568/237, in_queue=1805, util=98.64% 00:20:52.720 nvme0n4: ios=1403/1536, merge=0/0, ticks=467/311, in_queue=778, util=89.61% 00:20:52.720 10:17:25 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:52.720 [global] 00:20:52.720 thread=1 00:20:52.720 invalidate=1 00:20:52.720 rw=randwrite 00:20:52.720 time_based=1 00:20:52.720 runtime=1 00:20:52.720 ioengine=libaio 00:20:52.720 direct=1 00:20:52.720 bs=4096 00:20:52.720 iodepth=1 00:20:52.720 norandommap=0 00:20:52.720 numjobs=1 00:20:52.720 00:20:52.720 verify_dump=1 00:20:52.720 verify_backlog=512 00:20:52.720 verify_state_save=0 00:20:52.720 do_verify=1 00:20:52.720 verify=crc32c-intel 00:20:52.720 [job0] 00:20:52.720 filename=/dev/nvme0n1 00:20:52.720 [job1] 00:20:52.720 filename=/dev/nvme0n2 00:20:52.720 [job2] 00:20:52.720 filename=/dev/nvme0n3 00:20:52.720 [job3] 00:20:52.720 filename=/dev/nvme0n4 00:20:52.720 Could not set queue depth (nvme0n1) 00:20:52.720 Could not set queue depth (nvme0n2) 00:20:52.720 Could not set queue depth (nvme0n3) 00:20:52.720 Could not set queue depth (nvme0n4) 00:20:52.982 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.982 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.982 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.982 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:52.982 fio-3.35 00:20:52.982 Starting 4 threads 00:20:54.378 00:20:54.378 job0: (groupid=0, jobs=1): err= 0: pid=3472645: Wed Apr 17 10:17:27 2024 00:20:54.378 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6132KiB/1001msec) 00:20:54.378 slat (nsec): min=7358, max=27871, avg=8408.53, stdev=1159.69 00:20:54.378 clat (usec): min=298, max=809, avg=386.08, stdev=49.76 00:20:54.378 lat (usec): min=308, max=817, avg=394.49, stdev=49.76 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:20:54.378 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:20:54.378 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 465], 95.00th=[ 490], 00:20:54.378 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 635], 99.95th=[ 807], 00:20:54.378 | 99.99th=[ 807] 00:20:54.378 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:54.378 slat (nsec): min=10761, max=45051, avg=12268.60, stdev=2134.98 00:20:54.378 clat (usec): min=178, max=467, avg=238.72, stdev=40.07 00:20:54.378 lat (usec): min=190, max=478, avg=250.98, stdev=40.23 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:20:54.378 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 235], 00:20:54.378 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 326], 00:20:54.378 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 461], 99.95th=[ 469], 00:20:54.378 | 99.99th=[ 469] 00:20:54.378 bw ( KiB/s): min= 8192, max= 8192, per=47.27%, avg=8192.00, stdev= 0.00, samples=1 00:20:54.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:54.378 lat (usec) : 250=37.28%, 500=61.03%, 750=1.66%, 1000=0.03% 00:20:54.378 cpu : usr=2.90%, sys=4.80%, ctx=3070, majf=0, minf=1 00:20:54.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 issued rwts: total=1533,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.378 job1: (groupid=0, jobs=1): err= 0: pid=3472647: Wed Apr 17 10:17:27 2024 00:20:54.378 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:20:54.378 slat (nsec): min=9925, max=23683, avg=21460.18, stdev=2676.90 00:20:54.378 clat (usec): min=40834, max=41376, avg=40986.91, stdev=103.89 00:20:54.378 lat (usec): min=40856, max=41386, avg=41008.37, stdev=101.79 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:54.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:54.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:54.378 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:54.378 | 99.99th=[41157] 00:20:54.378 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:20:54.378 slat (nsec): min=10503, max=38670, avg=11969.23, stdev=2097.57 00:20:54.378 clat (usec): min=190, max=340, 
avg=225.97, stdev=16.41 00:20:54.378 lat (usec): min=202, max=379, avg=237.94, stdev=17.06 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:20:54.378 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:20:54.378 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 253], 00:20:54.378 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 343], 99.95th=[ 343], 00:20:54.378 | 99.99th=[ 343] 00:20:54.378 bw ( KiB/s): min= 4096, max= 4096, per=23.64%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.378 lat (usec) : 250=88.95%, 500=6.93% 00:20:54.378 lat (msec) : 50=4.12% 00:20:54.378 cpu : usr=0.59%, sys=0.68%, ctx=535, majf=0, minf=2 00:20:54.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.378 job2: (groupid=0, jobs=1): err= 0: pid=3472648: Wed Apr 17 10:17:27 2024 00:20:54.378 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:54.378 slat (nsec): min=7101, max=39040, avg=8104.34, stdev=1605.25 00:20:54.378 clat (usec): min=272, max=503, avg=332.29, stdev=18.83 00:20:54.378 lat (usec): min=294, max=512, avg=340.40, stdev=18.83 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:20:54.378 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:20:54.378 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 363], 00:20:54.378 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 490], 99.95th=[ 506], 00:20:54.378 | 99.99th=[ 506] 00:20:54.378 write: IOPS=1883, BW=7532KiB/s (7713kB/s)(7540KiB/1001msec); 0 zone resets 00:20:54.378 slat (nsec): min=10362, max=50491, avg=11733.34, stdev=1851.42 00:20:54.378 clat (usec): min=169, max=415, avg=235.80, stdev=20.89 00:20:54.378 lat (usec): min=197, max=456, avg=247.54, stdev=21.11 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:20:54.378 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:20:54.378 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 269], 00:20:54.378 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 400], 99.95th=[ 416], 00:20:54.378 | 99.99th=[ 416] 00:20:54.378 bw ( KiB/s): min= 8192, max= 8192, per=47.27%, avg=8192.00, stdev= 0.00, samples=1 00:20:54.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:54.378 lat (usec) : 250=42.82%, 500=57.15%, 750=0.03% 00:20:54.378 cpu : usr=2.70%, sys=5.70%, ctx=3421, majf=0, minf=1 00:20:54.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 issued rwts: total=1536,1885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.378 job3: (groupid=0, jobs=1): err= 0: pid=3472649: Wed Apr 17 10:17:27 2024 00:20:54.378 read: IOPS=20, BW=83.0KiB/s (85.0kB/s)(84.0KiB/1012msec) 00:20:54.378 slat (nsec): min=10054, max=24391, avg=22144.19, 
stdev=2959.27 00:20:54.378 clat (usec): min=40848, max=41407, avg=40988.61, stdev=118.98 00:20:54.378 lat (usec): min=40870, max=41417, avg=41010.76, stdev=116.90 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:54.378 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:54.378 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:54.378 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:54.378 | 99.99th=[41157] 00:20:54.378 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:20:54.378 slat (nsec): min=10810, max=36649, avg=12086.75, stdev=1642.59 00:20:54.378 clat (usec): min=197, max=436, avg=278.47, stdev=39.67 00:20:54.378 lat (usec): min=209, max=473, avg=290.56, stdev=39.85 00:20:54.378 clat percentiles (usec): 00:20:54.378 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 247], 00:20:54.378 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:20:54.378 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 351], 00:20:54.378 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 437], 00:20:54.378 | 99.99th=[ 437] 00:20:54.378 bw ( KiB/s): min= 4096, max= 4096, per=23.64%, avg=4096.00, stdev= 0.00, samples=1 00:20:54.378 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:54.378 lat (usec) : 250=23.64%, 500=72.42% 00:20:54.378 lat (msec) : 50=3.94% 00:20:54.378 cpu : usr=0.30%, sys=0.99%, ctx=534, majf=0, minf=1 00:20:54.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:54.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.378 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:54.378 00:20:54.378 Run status group 0 (all jobs): 00:20:54.378 READ: bw=11.8MiB/s (12.4MB/s), 83.0KiB/s-6138KiB/s (85.0kB/s-6285kB/s), io=12.2MiB (12.7MB), run=1001-1026msec 00:20:54.378 WRITE: bw=16.9MiB/s (17.7MB/s), 1996KiB/s-7532KiB/s (2044kB/s-7713kB/s), io=17.4MiB (18.2MB), run=1001-1026msec 00:20:54.378 00:20:54.378 Disk stats (read/write): 00:20:54.378 nvme0n1: ios=1195/1536, merge=0/0, ticks=1292/344, in_queue=1636, util=85.17% 00:20:54.378 nvme0n2: ios=39/512, merge=0/0, ticks=1605/111, in_queue=1716, util=89.15% 00:20:54.378 nvme0n3: ios=1387/1536, merge=0/0, ticks=501/350, in_queue=851, util=94.43% 00:20:54.379 nvme0n4: ios=39/512, merge=0/0, ticks=1560/138, in_queue=1698, util=94.29% 00:20:54.379 10:17:27 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:54.379 [global] 00:20:54.379 thread=1 00:20:54.379 invalidate=1 00:20:54.379 rw=write 00:20:54.379 time_based=1 00:20:54.379 runtime=1 00:20:54.379 ioengine=libaio 00:20:54.379 direct=1 00:20:54.379 bs=4096 00:20:54.379 iodepth=128 00:20:54.379 norandommap=0 00:20:54.379 numjobs=1 00:20:54.379 00:20:54.379 verify_dump=1 00:20:54.379 verify_backlog=512 00:20:54.379 verify_state_save=0 00:20:54.379 do_verify=1 00:20:54.379 verify=crc32c-intel 00:20:54.379 [job0] 00:20:54.379 filename=/dev/nvme0n1 00:20:54.379 [job1] 00:20:54.379 filename=/dev/nvme0n2 00:20:54.379 [job2] 00:20:54.379 filename=/dev/nvme0n3 00:20:54.379 [job3] 00:20:54.379 filename=/dev/nvme0n4 00:20:54.379 Could not set queue depth 
(nvme0n1) 00:20:54.379 Could not set queue depth (nvme0n2) 00:20:54.379 Could not set queue depth (nvme0n3) 00:20:54.379 Could not set queue depth (nvme0n4) 00:20:54.639 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.639 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.639 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.639 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.639 fio-3.35 00:20:54.640 Starting 4 threads 00:20:56.030 00:20:56.030 job0: (groupid=0, jobs=1): err= 0: pid=3473068: Wed Apr 17 10:17:29 2024 00:20:56.030 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:20:56.030 slat (nsec): min=1957, max=16002k, avg=118435.24, stdev=740117.77 00:20:56.030 clat (usec): min=7897, max=42463, avg=14780.75, stdev=4111.09 00:20:56.030 lat (usec): min=7903, max=42483, avg=14899.18, stdev=4175.34 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11469], 20.00th=[12518], 00:20:56.030 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14484], 00:20:56.030 | 70.00th=[14746], 80.00th=[16057], 90.00th=[19006], 95.00th=[26346], 00:20:56.030 | 99.00th=[30278], 99.50th=[30802], 99.90th=[32637], 99.95th=[39060], 00:20:56.030 | 99.99th=[42206] 00:20:56.030 write: IOPS=4471, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1007msec); 0 zone resets 00:20:56.030 slat (usec): min=3, max=8195, avg=107.50, stdev=577.39 00:20:56.030 clat (usec): min=5982, max=28555, avg=14824.78, stdev=2978.55 00:20:56.030 lat (usec): min=6510, max=28562, avg=14932.29, stdev=3013.03 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 8455], 5.00th=[10814], 10.00th=[12780], 20.00th=[13435], 00:20:56.030 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:20:56.030 | 70.00th=[15533], 80.00th=[16188], 90.00th=[17171], 95.00th=[19006], 00:20:56.030 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:20:56.030 | 99.99th=[28443] 00:20:56.030 bw ( KiB/s): min=16384, max=18624, per=27.48%, avg=17504.00, stdev=1583.92, samples=2 00:20:56.030 iops : min= 4096, max= 4656, avg=4376.00, stdev=395.98, samples=2 00:20:56.030 lat (msec) : 10=3.12%, 20=90.74%, 50=6.14% 00:20:56.030 cpu : usr=4.37%, sys=5.77%, ctx=474, majf=0, minf=1 00:20:56.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:56.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.030 issued rwts: total=4096,4503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.030 job1: (groupid=0, jobs=1): err= 0: pid=3473069: Wed Apr 17 10:17:29 2024 00:20:56.030 read: IOPS=4066, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1009msec) 00:20:56.030 slat (usec): min=2, max=15159, avg=111.38, stdev=683.39 00:20:56.030 clat (usec): min=6361, max=27751, avg=13612.71, stdev=2379.88 00:20:56.030 lat (usec): min=7050, max=27768, avg=13724.08, stdev=2429.94 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[11207], 20.00th=[12387], 00:20:56.030 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:20:56.030 | 70.00th=[13960], 80.00th=[14877], 90.00th=[16581], 
95.00th=[18744], 00:20:56.030 | 99.00th=[21627], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:20:56.030 | 99.99th=[27657] 00:20:56.030 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:20:56.030 slat (usec): min=3, max=26360, avg=111.37, stdev=726.34 00:20:56.030 clat (usec): min=7056, max=43538, avg=15576.40, stdev=5204.32 00:20:56.030 lat (usec): min=7067, max=43571, avg=15687.77, stdev=5233.93 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 8455], 5.00th=[11863], 10.00th=[12911], 20.00th=[13304], 00:20:56.030 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14484], 60.00th=[15008], 00:20:56.030 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16909], 95.00th=[30802], 00:20:56.030 | 99.00th=[38011], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:20:56.030 | 99.99th=[43779] 00:20:56.030 bw ( KiB/s): min=17408, max=18488, per=28.18%, avg=17948.00, stdev=763.68, samples=2 00:20:56.030 iops : min= 4352, max= 4622, avg=4487.00, stdev=190.92, samples=2 00:20:56.030 lat (msec) : 10=3.56%, 20=91.30%, 50=5.14% 00:20:56.030 cpu : usr=4.07%, sys=6.15%, ctx=431, majf=0, minf=1 00:20:56.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:56.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.030 issued rwts: total=4103,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.030 job2: (groupid=0, jobs=1): err= 0: pid=3473070: Wed Apr 17 10:17:29 2024 00:20:56.030 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:20:56.030 slat (nsec): min=1634, max=23944k, avg=116749.32, stdev=1087359.39 00:20:56.030 clat (usec): min=2961, max=48091, avg=16964.72, stdev=7786.94 00:20:56.030 lat (usec): min=2971, max=48752, avg=17081.47, stdev=7876.87 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 3359], 5.00th=[ 4817], 10.00th=[ 9503], 20.00th=[10814], 00:20:56.030 | 30.00th=[11863], 40.00th=[13698], 50.00th=[14615], 60.00th=[16450], 00:20:56.030 | 70.00th=[19792], 80.00th=[25560], 90.00th=[28443], 95.00th=[29754], 00:20:56.030 | 99.00th=[33817], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:20:56.030 | 99.99th=[47973] 00:20:56.030 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(15.5MiB/1013msec); 0 zone resets 00:20:56.030 slat (usec): min=2, max=24317, avg=115.39, stdev=1009.29 00:20:56.030 clat (usec): min=1247, max=51511, avg=17000.01, stdev=8387.41 00:20:56.030 lat (usec): min=1254, max=51520, avg=17115.40, stdev=8455.29 00:20:56.030 clat percentiles (usec): 00:20:56.030 | 1.00th=[ 2606], 5.00th=[ 5800], 10.00th=[ 7832], 20.00th=[11731], 00:20:56.030 | 30.00th=[12780], 40.00th=[13304], 50.00th=[14746], 60.00th=[16909], 00:20:56.030 | 70.00th=[19530], 80.00th=[22152], 90.00th=[26346], 95.00th=[32113], 00:20:56.030 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:20:56.031 | 99.99th=[51643] 00:20:56.031 bw ( KiB/s): min=14392, max=16384, per=24.16%, avg=15388.00, stdev=1408.56, samples=2 00:20:56.031 iops : min= 3598, max= 4096, avg=3847.00, stdev=352.14, samples=2 00:20:56.031 lat (msec) : 2=0.28%, 4=2.58%, 10=10.97%, 20=57.85%, 50=28.16% 00:20:56.031 lat (msec) : 100=0.17% 00:20:56.031 cpu : usr=3.46%, sys=4.25%, ctx=281, majf=0, minf=1 00:20:56.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:56.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:56.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.031 issued rwts: total=3584,3974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.031 job3: (groupid=0, jobs=1): err= 0: pid=3473071: Wed Apr 17 10:17:29 2024 00:20:56.031 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:20:56.031 slat (nsec): min=1934, max=38813k, avg=211491.32, stdev=1641045.95 00:20:56.031 clat (usec): min=6038, max=71389, avg=28566.43, stdev=14741.22 00:20:56.031 lat (usec): min=6044, max=71415, avg=28777.92, stdev=14873.70 00:20:56.031 clat percentiles (usec): 00:20:56.031 | 1.00th=[11863], 5.00th=[13042], 10.00th=[14091], 20.00th=[15533], 00:20:56.031 | 30.00th=[18482], 40.00th=[20579], 50.00th=[23200], 60.00th=[26870], 00:20:56.031 | 70.00th=[31065], 80.00th=[43779], 90.00th=[54264], 95.00th=[56361], 00:20:56.031 | 99.00th=[64750], 99.50th=[65274], 99.90th=[67634], 99.95th=[70779], 00:20:56.031 | 99.99th=[71828] 00:20:56.031 write: IOPS=3031, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1004msec); 0 zone resets 00:20:56.031 slat (usec): min=3, max=15809, avg=123.88, stdev=939.55 00:20:56.031 clat (usec): min=561, max=59007, avg=17582.04, stdev=9195.75 00:20:56.031 lat (usec): min=591, max=59020, avg=17705.92, stdev=9255.26 00:20:56.031 clat percentiles (usec): 00:20:56.031 | 1.00th=[ 2966], 5.00th=[ 3458], 10.00th=[ 7177], 20.00th=[13698], 00:20:56.031 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17171], 60.00th=[17957], 00:20:56.031 | 70.00th=[18220], 80.00th=[18482], 90.00th=[21890], 95.00th=[33162], 00:20:56.031 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:20:56.031 | 99.99th=[58983] 00:20:56.031 bw ( KiB/s): min=11048, max=12288, per=18.32%, avg=11668.00, stdev=876.81, samples=2 00:20:56.031 iops : min= 2762, max= 3072, avg=2917.00, stdev=219.20, samples=2 00:20:56.031 lat (usec) : 750=0.02%, 1000=0.02% 00:20:56.031 lat (msec) : 2=0.09%, 4=3.23%, 10=3.16%, 20=56.92%, 50=28.14% 00:20:56.031 lat (msec) : 100=8.42% 00:20:56.031 cpu : usr=3.09%, sys=3.49%, ctx=223, majf=0, minf=1 00:20:56.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:56.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.031 issued rwts: total=2560,3044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.031 00:20:56.031 Run status group 0 (all jobs): 00:20:56.031 READ: bw=55.3MiB/s (58.0MB/s), 9.96MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=56.0MiB (58.7MB), run=1004-1013msec 00:20:56.031 WRITE: bw=62.2MiB/s (65.2MB/s), 11.8MiB/s-17.8MiB/s (12.4MB/s-18.7MB/s), io=63.0MiB (66.1MB), run=1004-1013msec 00:20:56.031 00:20:56.031 Disk stats (read/write): 00:20:56.031 nvme0n1: ios=3375/3584, merge=0/0, ticks=26588/25202, in_queue=51790, util=87.78% 00:20:56.031 nvme0n2: ios=3634/3663, merge=0/0, ticks=25362/26100, in_queue=51462, util=89.38% 00:20:56.031 nvme0n3: ios=3085/3333, merge=0/0, ticks=45214/43718, in_queue=88932, util=92.47% 00:20:56.031 nvme0n4: ios=2284/2560, merge=0/0, ticks=53799/44087, in_queue=97886, util=93.34% 00:20:56.031 10:17:29 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:56.031 [global] 00:20:56.031 thread=1 00:20:56.031 invalidate=1 00:20:56.031 rw=randwrite 00:20:56.031 
time_based=1 00:20:56.031 runtime=1 00:20:56.031 ioengine=libaio 00:20:56.031 direct=1 00:20:56.031 bs=4096 00:20:56.031 iodepth=128 00:20:56.031 norandommap=0 00:20:56.031 numjobs=1 00:20:56.031 00:20:56.031 verify_dump=1 00:20:56.031 verify_backlog=512 00:20:56.031 verify_state_save=0 00:20:56.031 do_verify=1 00:20:56.031 verify=crc32c-intel 00:20:56.031 [job0] 00:20:56.031 filename=/dev/nvme0n1 00:20:56.031 [job1] 00:20:56.031 filename=/dev/nvme0n2 00:20:56.031 [job2] 00:20:56.031 filename=/dev/nvme0n3 00:20:56.031 [job3] 00:20:56.031 filename=/dev/nvme0n4 00:20:56.031 Could not set queue depth (nvme0n1) 00:20:56.031 Could not set queue depth (nvme0n2) 00:20:56.031 Could not set queue depth (nvme0n3) 00:20:56.031 Could not set queue depth (nvme0n4) 00:20:56.290 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.290 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.290 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.290 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:56.290 fio-3.35 00:20:56.290 Starting 4 threads 00:20:57.661 00:20:57.661 job0: (groupid=0, jobs=1): err= 0: pid=3473498: Wed Apr 17 10:17:30 2024 00:20:57.661 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1002msec) 00:20:57.661 slat (nsec): min=1540, max=15384k, avg=112407.84, stdev=737157.99 00:20:57.661 clat (usec): min=637, max=42385, avg=13732.81, stdev=4126.25 00:20:57.661 lat (usec): min=2429, max=42411, avg=13845.22, stdev=4182.68 00:20:57.661 clat percentiles (usec): 00:20:57.661 | 1.00th=[ 5276], 5.00th=[ 7570], 10.00th=[ 9896], 20.00th=[11600], 00:20:57.661 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13304], 60.00th=[13698], 00:20:57.661 | 70.00th=[14222], 80.00th=[15795], 90.00th=[17957], 95.00th=[20841], 00:20:57.661 | 99.00th=[30802], 99.50th=[34341], 99.90th=[41681], 99.95th=[41681], 00:20:57.661 | 99.99th=[42206] 00:20:57.661 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:20:57.661 slat (usec): min=2, max=9114, avg=128.13, stdev=578.06 00:20:57.661 clat (usec): min=1405, max=58715, avg=18259.32, stdev=10428.92 00:20:57.661 lat (usec): min=2286, max=58721, avg=18387.46, stdev=10495.25 00:20:57.661 clat percentiles (usec): 00:20:57.661 | 1.00th=[ 3490], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[12518], 00:20:57.661 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14877], 00:20:57.661 | 70.00th=[17433], 80.00th=[26608], 90.00th=[32900], 95.00th=[43254], 00:20:57.661 | 99.00th=[52691], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:20:57.661 | 99.99th=[58459] 00:20:57.661 bw ( KiB/s): min=12288, max=20480, per=25.23%, avg=16384.00, stdev=5792.62, samples=2 00:20:57.661 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:20:57.661 lat (usec) : 750=0.01% 00:20:57.661 lat (msec) : 2=0.01%, 4=0.89%, 10=9.36%, 20=72.76%, 50=15.79% 00:20:57.661 lat (msec) : 100=1.17% 00:20:57.661 cpu : usr=2.60%, sys=3.50%, ctx=535, majf=0, minf=1 00:20:57.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:57.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.661 issued rwts: total=3774,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.661 
latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.661 job1: (groupid=0, jobs=1): err= 0: pid=3473499: Wed Apr 17 10:17:30 2024 00:20:57.661 read: IOPS=4998, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:20:57.661 slat (nsec): min=1976, max=11267k, avg=86638.63, stdev=576940.95 00:20:57.661 clat (usec): min=2409, max=29480, avg=11381.09, stdev=3676.50 00:20:57.661 lat (usec): min=2414, max=29485, avg=11467.72, stdev=3722.23 00:20:57.661 clat percentiles (usec): 00:20:57.661 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:20:57.661 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:20:57.661 | 70.00th=[12387], 80.00th=[13960], 90.00th=[17171], 95.00th=[17957], 00:20:57.661 | 99.00th=[25297], 99.50th=[25297], 99.90th=[29492], 99.95th=[29492], 00:20:57.661 | 99.99th=[29492] 00:20:57.661 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:20:57.661 slat (usec): min=2, max=13018, avg=100.47, stdev=614.84 00:20:57.661 clat (usec): min=719, max=70189, avg=13613.65, stdev=9704.16 00:20:57.661 lat (usec): min=2340, max=70198, avg=13714.13, stdev=9765.03 00:20:57.661 clat percentiles (usec): 00:20:57.661 | 1.00th=[ 4817], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9503], 00:20:57.661 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10552], 00:20:57.661 | 70.00th=[13435], 80.00th=[15008], 90.00th=[22152], 95.00th=[27132], 00:20:57.661 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:20:57.661 | 99.99th=[69731] 00:20:57.661 bw ( KiB/s): min=15136, max=25824, per=31.53%, avg=20480.00, stdev=7557.56, samples=2 00:20:57.661 iops : min= 3784, max= 6456, avg=5120.00, stdev=1889.39, samples=2 00:20:57.661 lat (usec) : 750=0.01% 00:20:57.661 lat (msec) : 4=0.78%, 10=48.90%, 20=43.08%, 50=6.06%, 100=1.17% 00:20:57.661 cpu : usr=4.19%, sys=5.69%, ctx=433, majf=0, minf=1 00:20:57.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:57.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.661 issued rwts: total=5013,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.662 job2: (groupid=0, jobs=1): err= 0: pid=3473500: Wed Apr 17 10:17:30 2024 00:20:57.662 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:20:57.662 slat (nsec): min=1871, max=29153k, avg=221211.14, stdev=1396738.19 00:20:57.662 clat (msec): min=8, max=121, avg=26.76, stdev=20.08 00:20:57.662 lat (msec): min=8, max=121, avg=26.98, stdev=20.22 00:20:57.662 clat percentiles (msec): 00:20:57.662 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:20:57.662 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:20:57.662 | 70.00th=[ 26], 80.00th=[ 28], 90.00th=[ 52], 95.00th=[ 71], 00:20:57.662 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 116], 99.95th=[ 120], 00:20:57.662 | 99.99th=[ 122] 00:20:57.662 write: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1006msec); 0 zone resets 00:20:57.662 slat (usec): min=3, max=13497, avg=160.22, stdev=811.11 00:20:57.662 clat (usec): min=4671, max=77226, avg=22540.67, stdev=13650.18 00:20:57.662 lat (usec): min=6153, max=77233, avg=22700.88, stdev=13723.86 00:20:57.662 clat percentiles (usec): 00:20:57.662 | 1.00th=[ 8160], 5.00th=[12518], 10.00th=[14484], 20.00th=[15664], 00:20:57.662 | 30.00th=[16057], 40.00th=[17171], 50.00th=[17695], 
60.00th=[18220], 00:20:57.662 | 70.00th=[19006], 80.00th=[28443], 90.00th=[32900], 95.00th=[57934], 00:20:57.662 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:20:57.662 | 99.99th=[77071] 00:20:57.662 bw ( KiB/s): min= 6552, max=13928, per=15.77%, avg=10240.00, stdev=5215.62, samples=2 00:20:57.662 iops : min= 1638, max= 3482, avg=2560.00, stdev=1303.90, samples=2 00:20:57.662 lat (msec) : 10=1.39%, 20=56.60%, 50=32.94%, 100=8.12%, 250=0.95% 00:20:57.662 cpu : usr=2.39%, sys=3.08%, ctx=330, majf=0, minf=1 00:20:57.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:57.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.662 issued rwts: total=2560,2610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.662 job3: (groupid=0, jobs=1): err= 0: pid=3473501: Wed Apr 17 10:17:30 2024 00:20:57.662 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:20:57.662 slat (usec): min=2, max=15133, avg=106.12, stdev=783.74 00:20:57.662 clat (usec): min=4422, max=43182, avg=13657.16, stdev=5084.83 00:20:57.662 lat (usec): min=4428, max=43186, avg=13763.28, stdev=5131.57 00:20:57.662 clat percentiles (usec): 00:20:57.662 | 1.00th=[ 5997], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[ 9503], 00:20:57.662 | 30.00th=[10028], 40.00th=[10683], 50.00th=[12780], 60.00th=[14484], 00:20:57.662 | 70.00th=[16450], 80.00th=[17171], 90.00th=[18482], 95.00th=[21103], 00:20:57.662 | 99.00th=[33817], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:20:57.662 | 99.99th=[43254] 00:20:57.662 write: IOPS=4551, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1012msec); 0 zone resets 00:20:57.662 slat (usec): min=3, max=14981, avg=115.93, stdev=732.93 00:20:57.662 clat (usec): min=1543, max=48557, avg=15622.36, stdev=9453.81 00:20:57.662 lat (usec): min=1556, max=48571, avg=15738.30, stdev=9519.46 00:20:57.662 clat percentiles (usec): 00:20:57.662 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 8455], 00:20:57.662 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12387], 60.00th=[13960], 00:20:57.662 | 70.00th=[15270], 80.00th=[23462], 90.00th=[30802], 95.00th=[38011], 00:20:57.662 | 99.00th=[42730], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:20:57.662 | 99.99th=[48497] 00:20:57.662 bw ( KiB/s): min=11256, max=24576, per=27.58%, avg=17916.00, stdev=9418.66, samples=2 00:20:57.662 iops : min= 2814, max= 6144, avg=4479.00, stdev=2354.67, samples=2 00:20:57.662 lat (msec) : 2=0.03%, 4=0.23%, 10=27.98%, 20=57.80%, 50=13.95% 00:20:57.662 cpu : usr=4.06%, sys=5.04%, ctx=394, majf=0, minf=1 00:20:57.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:57.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.662 issued rwts: total=4096,4606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.662 00:20:57.662 Run status group 0 (all jobs): 00:20:57.662 READ: bw=59.6MiB/s (62.5MB/s), 9.94MiB/s-19.5MiB/s (10.4MB/s-20.5MB/s), io=60.3MiB (63.3MB), run=1002-1012msec 00:20:57.662 WRITE: bw=63.4MiB/s (66.5MB/s), 10.1MiB/s-19.9MiB/s (10.6MB/s-20.9MB/s), io=64.2MiB (67.3MB), run=1002-1012msec 00:20:57.662 00:20:57.662 Disk stats (read/write): 00:20:57.662 nvme0n1: ios=3105/3375, 
merge=0/0, ticks=35598/54254, in_queue=89852, util=97.60% 00:20:57.662 nvme0n2: ios=4048/4096, merge=0/0, ticks=25170/28945, in_queue=54115, util=98.27% 00:20:57.662 nvme0n3: ios=2048/2447, merge=0/0, ticks=21038/19501, in_queue=40539, util=88.77% 00:20:57.662 nvme0n4: ios=3885/4096, merge=0/0, ticks=46195/53194, in_queue=99389, util=98.22% 00:20:57.662 10:17:30 -- target/fio.sh@55 -- # sync 00:20:57.662 10:17:30 -- target/fio.sh@59 -- # fio_pid=3473765 00:20:57.662 10:17:30 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:57.662 10:17:30 -- target/fio.sh@61 -- # sleep 3 00:20:57.662 [global] 00:20:57.662 thread=1 00:20:57.662 invalidate=1 00:20:57.662 rw=read 00:20:57.662 time_based=1 00:20:57.662 runtime=10 00:20:57.662 ioengine=libaio 00:20:57.662 direct=1 00:20:57.662 bs=4096 00:20:57.662 iodepth=1 00:20:57.662 norandommap=1 00:20:57.662 numjobs=1 00:20:57.662 00:20:57.662 [job0] 00:20:57.662 filename=/dev/nvme0n1 00:20:57.662 [job1] 00:20:57.662 filename=/dev/nvme0n2 00:20:57.662 [job2] 00:20:57.662 filename=/dev/nvme0n3 00:20:57.662 [job3] 00:20:57.662 filename=/dev/nvme0n4 00:20:57.662 Could not set queue depth (nvme0n1) 00:20:57.662 Could not set queue depth (nvme0n2) 00:20:57.662 Could not set queue depth (nvme0n3) 00:20:57.662 Could not set queue depth (nvme0n4) 00:20:57.919 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:57.919 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:57.919 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:57.919 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:57.919 fio-3.35 00:20:57.919 Starting 4 threads 00:21:00.439 10:17:33 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:00.696 10:17:33 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:00.696 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:21:00.696 fio: pid=3473924, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:00.952 10:17:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:00.952 10:17:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:00.952 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=23928832, buflen=4096 00:21:00.952 fio: pid=3473923, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:01.209 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=315392, buflen=4096 00:21:01.209 fio: pid=3473921, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:01.209 10:17:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.209 10:17:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:01.467 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=348160, buflen=4096 00:21:01.467 fio: pid=3473922, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:01.467 10:17:34 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.467 10:17:34 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:01.467 00:21:01.467 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473921: Wed Apr 17 10:17:34 2024 00:21:01.467 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(308KiB/3190msec) 00:21:01.467 slat (usec): min=9, max=26812, avg=514.94, stdev=3291.67 00:21:01.467 clat (usec): min=672, max=42243, avg=40628.54, stdev=4629.65 00:21:01.467 lat (usec): min=703, max=68133, avg=41149.88, stdev=5777.71 00:21:01.467 clat percentiles (usec): 00:21:01.467 | 1.00th=[ 676], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:01.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:01.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:21:01.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.467 | 99.99th=[42206] 00:21:01.467 bw ( KiB/s): min= 92, max= 104, per=1.36%, avg=96.67, stdev= 3.93, samples=6 00:21:01.467 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:21:01.467 lat (usec) : 750=1.28% 00:21:01.467 lat (msec) : 50=97.44% 00:21:01.467 cpu : usr=0.09%, sys=0.00%, ctx=80, majf=0, minf=1 00:21:01.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.467 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473922: Wed Apr 17 10:17:34 2024 00:21:01.467 read: IOPS=25, BW=98.9KiB/s (101kB/s)(340KiB/3438msec) 00:21:01.467 slat (usec): min=9, max=13700, avg=420.63, stdev=2141.62 00:21:01.467 clat (usec): min=474, max=42225, avg=39763.85, stdev=7554.29 00:21:01.467 lat (usec): min=499, max=55020, avg=40050.32, stdev=7820.15 00:21:01.467 clat percentiles (usec): 00:21:01.467 | 1.00th=[ 474], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:01.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:01.467 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:21:01.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.467 | 99.99th=[42206] 00:21:01.467 bw ( KiB/s): min= 96, max= 104, per=1.37%, avg=97.83, stdev= 3.25, samples=6 00:21:01.467 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:21:01.467 lat (usec) : 500=1.16%, 750=2.33% 00:21:01.467 lat (msec) : 50=95.35% 00:21:01.467 cpu : usr=0.00%, sys=0.09%, ctx=89, majf=0, minf=1 00:21:01.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.467 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473923: Wed Apr 17 10:17:34 2024 00:21:01.467 read: IOPS=1981, BW=7924KiB/s (8114kB/s)(22.8MiB/2949msec) 00:21:01.467 slat (nsec): min=7635, max=35416, avg=8988.36, 
stdev=1607.56 00:21:01.467 clat (usec): min=303, max=41998, avg=490.38, stdev=2149.59 00:21:01.467 lat (usec): min=311, max=42023, avg=499.37, stdev=2150.38 00:21:01.467 clat percentiles (usec): 00:21:01.467 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326], 00:21:01.467 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 375], 60.00th=[ 408], 00:21:01.467 | 70.00th=[ 420], 80.00th=[ 429], 90.00th=[ 441], 95.00th=[ 449], 00:21:01.467 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[42206], 99.95th=[42206], 00:21:01.467 | 99.99th=[42206] 00:21:01.467 bw ( KiB/s): min= 5416, max=10328, per=100.00%, avg=9329.60, stdev=2187.80, samples=5 00:21:01.467 iops : min= 1354, max= 2582, avg=2332.40, stdev=546.95, samples=5 00:21:01.467 lat (usec) : 500=99.45%, 750=0.26% 00:21:01.467 lat (msec) : 50=0.27% 00:21:01.467 cpu : usr=0.58%, sys=2.34%, ctx=5846, majf=0, minf=1 00:21:01.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 issued rwts: total=5843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.467 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3473924: Wed Apr 17 10:17:34 2024 00:21:01.467 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(260KiB/2697msec) 00:21:01.467 slat (nsec): min=7810, max=31645, avg=14409.77, stdev=6508.55 00:21:01.467 clat (usec): min=40843, max=42037, avg=41149.99, stdev=370.52 00:21:01.467 lat (usec): min=40858, max=42046, avg=41164.27, stdev=372.76 00:21:01.467 clat percentiles (usec): 00:21:01.467 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:21:01.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:21:01.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:21:01.467 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.467 | 99.99th=[42206] 00:21:01.467 bw ( KiB/s): min= 96, max= 96, per=1.36%, avg=96.00, stdev= 0.00, samples=5 00:21:01.467 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:21:01.467 lat (msec) : 50=98.48% 00:21:01.467 cpu : usr=0.04%, sys=0.00%, ctx=66, majf=0, minf=2 00:21:01.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.467 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.467 00:21:01.467 Run status group 0 (all jobs): 00:21:01.467 READ: bw=7061KiB/s (7231kB/s), 96.4KiB/s-7924KiB/s (98.7kB/s-8114kB/s), io=23.7MiB (24.9MB), run=2697-3438msec 00:21:01.467 00:21:01.467 Disk stats (read/write): 00:21:01.467 nvme0n1: ios=92/0, merge=0/0, ticks=3238/0, in_queue=3238, util=96.49% 00:21:01.467 nvme0n2: ios=82/0, merge=0/0, ticks=3298/0, in_queue=3298, util=95.84% 00:21:01.467 nvme0n3: ios=5880/0, merge=0/0, ticks=3764/0, in_queue=3764, util=99.63% 00:21:01.467 nvme0n4: ios=63/0, merge=0/0, ticks=2593/0, in_queue=2593, util=96.45% 00:21:01.724 10:17:34 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.724 10:17:34 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:01.980 10:17:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.980 10:17:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:02.236 10:17:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.237 10:17:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:02.493 10:17:35 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:02.493 10:17:35 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:02.750 10:17:35 -- target/fio.sh@69 -- # fio_status=0 00:21:02.750 10:17:35 -- target/fio.sh@70 -- # wait 3473765 00:21:02.750 10:17:35 -- target/fio.sh@70 -- # fio_status=4 00:21:02.750 10:17:35 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:02.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:02.750 10:17:36 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:02.750 10:17:36 -- common/autotest_common.sh@1198 -- # local i=0 00:21:02.750 10:17:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:02.750 10:17:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:02.750 10:17:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:02.750 10:17:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:02.750 10:17:36 -- common/autotest_common.sh@1210 -- # return 0 00:21:02.750 10:17:36 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:02.750 10:17:36 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:02.750 nvmf hotplug test: fio failed as expected 00:21:02.750 10:17:36 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.006 10:17:36 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:03.006 10:17:36 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:03.006 10:17:36 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:03.006 10:17:36 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:03.006 10:17:36 -- target/fio.sh@91 -- # nvmftestfini 00:21:03.006 10:17:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:03.006 10:17:36 -- nvmf/common.sh@116 -- # sync 00:21:03.006 10:17:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:03.006 10:17:36 -- nvmf/common.sh@119 -- # set +e 00:21:03.006 10:17:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:03.006 10:17:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:03.006 rmmod nvme_tcp 00:21:03.006 rmmod nvme_fabrics 00:21:03.263 rmmod nvme_keyring 00:21:03.263 10:17:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:03.263 10:17:36 -- nvmf/common.sh@123 -- # set -e 00:21:03.263 10:17:36 -- nvmf/common.sh@124 -- # return 0 00:21:03.263 10:17:36 -- nvmf/common.sh@477 -- # '[' -n 3470400 ']' 00:21:03.263 10:17:36 -- nvmf/common.sh@478 -- # killprocess 3470400 00:21:03.263 10:17:36 -- common/autotest_common.sh@926 -- # '[' -z 3470400 ']' 00:21:03.263 10:17:36 -- common/autotest_common.sh@930 -- # kill -0 3470400 00:21:03.263 10:17:36 -- 
common/autotest_common.sh@931 -- # uname 00:21:03.263 10:17:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.263 10:17:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3470400 00:21:03.263 10:17:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:03.263 10:17:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:03.263 10:17:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3470400' 00:21:03.263 killing process with pid 3470400 00:21:03.263 10:17:36 -- common/autotest_common.sh@945 -- # kill 3470400 00:21:03.263 10:17:36 -- common/autotest_common.sh@950 -- # wait 3470400 00:21:03.521 10:17:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.521 10:17:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:03.521 10:17:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:03.521 10:17:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.521 10:17:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:03.521 10:17:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.521 10:17:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.521 10:17:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.419 10:17:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:05.419 00:21:05.419 real 0m28.810s 00:21:05.419 user 2m26.380s 00:21:05.419 sys 0m8.096s 00:21:05.419 10:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.419 10:17:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.419 ************************************ 00:21:05.419 END TEST nvmf_fio_target 00:21:05.419 ************************************ 00:21:05.677 10:17:38 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:05.677 10:17:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:05.677 10:17:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:05.677 10:17:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.677 ************************************ 00:21:05.677 START TEST nvmf_bdevio 00:21:05.677 ************************************ 00:21:05.677 10:17:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:05.677 * Looking for test storage... 
00:21:05.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.677 10:17:38 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.677 10:17:38 -- nvmf/common.sh@7 -- # uname -s 00:21:05.677 10:17:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.677 10:17:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.677 10:17:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.677 10:17:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.677 10:17:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.677 10:17:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.677 10:17:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.677 10:17:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.677 10:17:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.677 10:17:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.677 10:17:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:05.677 10:17:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:05.677 10:17:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.677 10:17:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.677 10:17:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.677 10:17:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.677 10:17:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.677 10:17:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.677 10:17:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.677 10:17:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.677 10:17:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.678 10:17:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.678 10:17:38 -- paths/export.sh@5 -- # export PATH 00:21:05.678 10:17:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.678 10:17:38 -- nvmf/common.sh@46 -- # : 0 00:21:05.678 10:17:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:05.678 10:17:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:05.678 10:17:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:05.678 10:17:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.678 10:17:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.678 10:17:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:05.678 10:17:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:05.678 10:17:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:05.678 10:17:38 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.678 10:17:38 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.678 10:17:38 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:05.678 10:17:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:05.678 10:17:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.678 10:17:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:05.678 10:17:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:05.678 10:17:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:05.678 10:17:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.678 10:17:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.678 10:17:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.678 10:17:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:05.678 10:17:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:05.678 10:17:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:05.678 10:17:38 -- common/autotest_common.sh@10 -- # set +x 00:21:12.228 10:17:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:12.228 10:17:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:12.228 10:17:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:12.228 10:17:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:12.228 10:17:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:12.228 10:17:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:12.228 10:17:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:12.228 10:17:44 -- nvmf/common.sh@294 -- # net_devs=() 00:21:12.228 10:17:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:12.228 10:17:44 -- nvmf/common.sh@295 
-- # e810=() 00:21:12.228 10:17:44 -- nvmf/common.sh@295 -- # local -ga e810 00:21:12.228 10:17:44 -- nvmf/common.sh@296 -- # x722=() 00:21:12.228 10:17:44 -- nvmf/common.sh@296 -- # local -ga x722 00:21:12.228 10:17:44 -- nvmf/common.sh@297 -- # mlx=() 00:21:12.228 10:17:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:12.228 10:17:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.228 10:17:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:12.228 10:17:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:12.228 10:17:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:12.228 10:17:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:12.228 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:12.228 10:17:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:12.228 10:17:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:12.228 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:12.228 10:17:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:12.228 10:17:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.228 10:17:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.228 10:17:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:12.228 Found 
net devices under 0000:af:00.0: cvl_0_0 00:21:12.228 10:17:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.228 10:17:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:12.228 10:17:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.228 10:17:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.228 10:17:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:12.228 Found net devices under 0000:af:00.1: cvl_0_1 00:21:12.228 10:17:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.228 10:17:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:12.228 10:17:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:12.228 10:17:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:12.228 10:17:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.228 10:17:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.228 10:17:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.228 10:17:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:12.228 10:17:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.228 10:17:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.228 10:17:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:12.228 10:17:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.228 10:17:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.228 10:17:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:12.228 10:17:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:12.228 10:17:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.228 10:17:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.228 10:17:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.228 10:17:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.228 10:17:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:12.228 10:17:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.228 10:17:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.228 10:17:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.228 10:17:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:12.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:12.228 00:21:12.228 --- 10.0.0.2 ping statistics --- 00:21:12.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.228 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:12.229 10:17:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:21:12.229 00:21:12.229 --- 10.0.0.1 ping statistics --- 00:21:12.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.229 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:12.229 10:17:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.229 10:17:44 -- nvmf/common.sh@410 -- # return 0 00:21:12.229 10:17:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:12.229 10:17:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.229 10:17:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:12.229 10:17:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:12.229 10:17:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.229 10:17:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:12.229 10:17:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:12.229 10:17:44 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:12.229 10:17:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.229 10:17:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:12.229 10:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:12.229 10:17:44 -- nvmf/common.sh@469 -- # nvmfpid=3478491 00:21:12.229 10:17:44 -- nvmf/common.sh@470 -- # waitforlisten 3478491 00:21:12.229 10:17:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:12.229 10:17:44 -- common/autotest_common.sh@819 -- # '[' -z 3478491 ']' 00:21:12.229 10:17:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.229 10:17:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.229 10:17:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.229 10:17:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.229 10:17:44 -- common/autotest_common.sh@10 -- # set +x 00:21:12.229 [2024-04-17 10:17:44.666071] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:12.229 [2024-04-17 10:17:44.666123] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.229 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.229 [2024-04-17 10:17:44.749264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.229 [2024-04-17 10:17:44.835787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:12.229 [2024-04-17 10:17:44.835929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.229 [2024-04-17 10:17:44.835941] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.229 [2024-04-17 10:17:44.835950] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
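
The block above is nvmf_tcp_init from nvmf/common.sh splitting the two E810 ports into an initiator side and a target side for the NVMe/TCP tests: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is checked with one ping in each direction before nvmf_tgt is started inside the namespace. As a standalone recap (a sketch, not the script itself; the interface names and addresses are the ones recorded in this log and are hardware-specific), the plumbing amounts to:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Because nvmf_tgt runs under ip netns exec cvl_0_0_ns_spdk, the listener created a few lines below binds to 10.0.0.2 inside the namespace while bdevio connects to it from the default namespace over cvl_0_1.
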
00:21:12.229 [2024-04-17 10:17:44.836067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.229 [2024-04-17 10:17:44.836178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:12.229 [2024-04-17 10:17:44.836287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.229 [2024-04-17 10:17:44.836287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:12.486 10:17:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.486 10:17:45 -- common/autotest_common.sh@852 -- # return 0 00:21:12.486 10:17:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:12.486 10:17:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 10:17:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.486 10:17:45 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.486 10:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 [2024-04-17 10:17:45.646459] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.486 10:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 10:17:45 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.486 10:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 Malloc0 00:21:12.486 10:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 10:17:45 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.486 10:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 10:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 10:17:45 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.486 10:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 10:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 10:17:45 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.486 10:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:12.486 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.486 [2024-04-17 10:17:45.702109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.486 10:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:12.486 10:17:45 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:12.486 10:17:45 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:12.486 10:17:45 -- nvmf/common.sh@520 -- # config=() 00:21:12.486 10:17:45 -- nvmf/common.sh@520 -- # local subsystem config 00:21:12.486 10:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:12.486 10:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:12.486 { 00:21:12.486 "params": { 00:21:12.486 "name": "Nvme$subsystem", 00:21:12.486 "trtype": "$TEST_TRANSPORT", 00:21:12.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.486 "adrfam": "ipv4", 00:21:12.486 "trsvcid": 
"$NVMF_PORT", 00:21:12.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.486 "hdgst": ${hdgst:-false}, 00:21:12.486 "ddgst": ${ddgst:-false} 00:21:12.486 }, 00:21:12.486 "method": "bdev_nvme_attach_controller" 00:21:12.486 } 00:21:12.486 EOF 00:21:12.486 )") 00:21:12.486 10:17:45 -- nvmf/common.sh@542 -- # cat 00:21:12.486 10:17:45 -- nvmf/common.sh@544 -- # jq . 00:21:12.486 10:17:45 -- nvmf/common.sh@545 -- # IFS=, 00:21:12.486 10:17:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:12.486 "params": { 00:21:12.486 "name": "Nvme1", 00:21:12.486 "trtype": "tcp", 00:21:12.486 "traddr": "10.0.0.2", 00:21:12.486 "adrfam": "ipv4", 00:21:12.486 "trsvcid": "4420", 00:21:12.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.486 "hdgst": false, 00:21:12.486 "ddgst": false 00:21:12.486 }, 00:21:12.486 "method": "bdev_nvme_attach_controller" 00:21:12.486 }' 00:21:12.486 [2024-04-17 10:17:45.749522] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:12.486 [2024-04-17 10:17:45.749578] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478775 ] 00:21:12.486 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.744 [2024-04-17 10:17:45.832578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.744 [2024-04-17 10:17:45.918226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.744 [2024-04-17 10:17:45.918247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.744 [2024-04-17 10:17:45.918250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.001 [2024-04-17 10:17:46.192850] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:13.001 [2024-04-17 10:17:46.192890] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:13.001 I/O targets: 00:21:13.001 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:13.001 00:21:13.001 00:21:13.001 CUnit - A unit testing framework for C - Version 2.1-3 00:21:13.001 http://cunit.sourceforge.net/ 00:21:13.001 00:21:13.001 00:21:13.001 Suite: bdevio tests on: Nvme1n1 00:21:13.001 Test: blockdev write read block ...passed 00:21:13.001 Test: blockdev write zeroes read block ...passed 00:21:13.001 Test: blockdev write zeroes read no split ...passed 00:21:13.258 Test: blockdev write zeroes read split ...passed 00:21:13.258 Test: blockdev write zeroes read split partial ...passed 00:21:13.258 Test: blockdev reset ...[2024-04-17 10:17:46.394580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:13.258 [2024-04-17 10:17:46.394651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186bad0 (9): Bad file descriptor 00:21:13.258 [2024-04-17 10:17:46.450602] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:13.258 passed 00:21:13.258 Test: blockdev write read 8 blocks ...passed 00:21:13.258 Test: blockdev write read size > 128k ...passed 00:21:13.258 Test: blockdev write read invalid size ...passed 00:21:13.258 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:13.258 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:13.258 Test: blockdev write read max offset ...passed 00:21:13.515 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:13.515 Test: blockdev writev readv 8 blocks ...passed 00:21:13.515 Test: blockdev writev readv 30 x 1block ...passed 00:21:13.515 Test: blockdev writev readv block ...passed 00:21:13.515 Test: blockdev writev readv size > 128k ...passed 00:21:13.515 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:13.515 Test: blockdev comparev and writev ...[2024-04-17 10:17:46.663343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.663370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.663382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.663389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.663721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.663734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.663745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.663751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.664079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.664089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.664099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.664106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.664451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.515 [2024-04-17 10:17:46.664461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:13.515 [2024-04-17 10:17:46.664471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.516 [2024-04-17 10:17:46.664477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:13.516 passed 00:21:13.516 Test: blockdev nvme passthru rw ...passed 00:21:13.516 Test: blockdev nvme passthru vendor specific ...[2024-04-17 10:17:46.747247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.516 [2024-04-17 10:17:46.747264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:13.516 [2024-04-17 10:17:46.747422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.516 [2024-04-17 10:17:46.747431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:13.516 [2024-04-17 10:17:46.747585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.516 [2024-04-17 10:17:46.747593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:13.516 [2024-04-17 10:17:46.747750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.516 [2024-04-17 10:17:46.747759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:13.516 passed 00:21:13.516 Test: blockdev nvme admin passthru ...passed 00:21:13.516 Test: blockdev copy ...passed 00:21:13.516 00:21:13.516 Run Summary: Type Total Ran Passed Failed Inactive 00:21:13.516 suites 1 1 n/a 0 0 00:21:13.516 tests 23 23 23 0 0 00:21:13.516 asserts 152 152 152 0 n/a 00:21:13.516 00:21:13.516 Elapsed time = 1.227 seconds 00:21:13.773 10:17:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.773 10:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.773 10:17:47 -- common/autotest_common.sh@10 -- # set +x 00:21:13.773 10:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.773 10:17:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:13.773 10:17:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:13.773 10:17:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.773 10:17:47 -- nvmf/common.sh@116 -- # sync 00:21:13.773 10:17:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.773 10:17:47 -- nvmf/common.sh@119 -- # set +e 00:21:13.773 10:17:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.773 10:17:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.773 rmmod nvme_tcp 00:21:13.773 rmmod nvme_fabrics 00:21:13.773 rmmod nvme_keyring 00:21:13.773 10:17:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.773 10:17:47 -- nvmf/common.sh@123 -- # set -e 00:21:13.773 10:17:47 -- nvmf/common.sh@124 -- # return 0 00:21:13.773 10:17:47 -- nvmf/common.sh@477 -- # '[' -n 3478491 ']' 00:21:13.773 10:17:47 -- nvmf/common.sh@478 -- # killprocess 3478491 00:21:13.773 10:17:47 -- common/autotest_common.sh@926 -- # '[' -z 3478491 ']' 00:21:13.773 10:17:47 -- common/autotest_common.sh@930 -- # kill -0 3478491 00:21:13.773 10:17:47 -- common/autotest_common.sh@931 -- # uname 00:21:13.773 10:17:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:13.773 10:17:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3478491 00:21:14.031 10:17:47 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:14.031 10:17:47 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:14.031 10:17:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3478491' 00:21:14.031 killing process with pid 3478491 00:21:14.031 10:17:47 -- common/autotest_common.sh@945 -- # kill 3478491 00:21:14.031 10:17:47 -- common/autotest_common.sh@950 -- # wait 3478491 00:21:14.289 10:17:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:14.289 10:17:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:14.289 10:17:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:14.289 10:17:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.289 10:17:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:14.289 10:17:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.289 10:17:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.289 10:17:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.188 10:17:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:16.188 00:21:16.188 real 0m10.683s 00:21:16.188 user 0m14.196s 00:21:16.188 sys 0m4.905s 00:21:16.188 10:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:16.188 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:21:16.188 ************************************ 00:21:16.188 END TEST nvmf_bdevio 00:21:16.188 ************************************ 00:21:16.188 10:17:49 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:16.188 10:17:49 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:16.188 10:17:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:16.188 10:17:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:16.188 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:21:16.188 ************************************ 00:21:16.188 START TEST nvmf_bdevio_no_huge 00:21:16.188 ************************************ 00:21:16.188 10:17:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:16.445 * Looking for test storage... 
00:21:16.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.445 10:17:49 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.445 10:17:49 -- nvmf/common.sh@7 -- # uname -s 00:21:16.445 10:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.445 10:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.445 10:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.445 10:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.445 10:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.445 10:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.445 10:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.445 10:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.445 10:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.445 10:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.445 10:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:16.445 10:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:16.445 10:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.445 10:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.445 10:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.445 10:17:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.445 10:17:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.445 10:17:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.445 10:17:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.446 10:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.446 10:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.446 10:17:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.446 10:17:49 -- paths/export.sh@5 -- # export PATH 00:21:16.446 10:17:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.446 10:17:49 -- nvmf/common.sh@46 -- # : 0 00:21:16.446 10:17:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:16.446 10:17:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:16.446 10:17:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:16.446 10:17:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.446 10:17:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.446 10:17:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:16.446 10:17:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:16.446 10:17:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:16.446 10:17:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.446 10:17:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.446 10:17:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:16.446 10:17:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:16.446 10:17:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.446 10:17:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:16.446 10:17:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:16.446 10:17:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:16.446 10:17:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.446 10:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.446 10:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.446 10:17:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:16.446 10:17:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:16.446 10:17:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:16.446 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:21:21.771 10:17:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:21.771 10:17:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:21.771 10:17:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:21.771 10:17:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:21.771 10:17:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:21.771 10:17:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:21.771 10:17:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:21.771 10:17:55 -- nvmf/common.sh@294 -- # net_devs=() 00:21:21.771 10:17:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:21.771 10:17:55 -- nvmf/common.sh@295 
-- # e810=() 00:21:21.771 10:17:55 -- nvmf/common.sh@295 -- # local -ga e810 00:21:21.771 10:17:55 -- nvmf/common.sh@296 -- # x722=() 00:21:21.771 10:17:55 -- nvmf/common.sh@296 -- # local -ga x722 00:21:21.771 10:17:55 -- nvmf/common.sh@297 -- # mlx=() 00:21:21.771 10:17:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:21.771 10:17:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.771 10:17:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.771 10:17:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.771 10:17:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.771 10:17:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.771 10:17:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.772 10:17:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:21.772 10:17:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:21.772 10:17:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:21.772 10:17:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:21.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:21.772 10:17:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:21.772 10:17:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:21.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:21.772 10:17:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:21.772 10:17:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.772 10:17:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.772 10:17:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:21.772 Found 
net devices under 0000:af:00.0: cvl_0_0 00:21:21.772 10:17:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.772 10:17:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:21.772 10:17:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.772 10:17:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.772 10:17:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:21.772 Found net devices under 0000:af:00.1: cvl_0_1 00:21:21.772 10:17:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.772 10:17:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:21.772 10:17:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:21.772 10:17:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:21.772 10:17:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.772 10:17:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.772 10:17:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.772 10:17:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:21.772 10:17:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.772 10:17:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.772 10:17:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:21.772 10:17:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.772 10:17:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.772 10:17:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:21.772 10:17:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:21.772 10:17:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.772 10:17:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.041 10:17:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.041 10:17:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.041 10:17:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:22.041 10:17:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.041 10:17:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.041 10:17:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.041 10:17:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:22.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:21:22.041 00:21:22.041 --- 10.0.0.2 ping statistics --- 00:21:22.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.041 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:22.041 10:17:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:22.042 00:21:22.042 --- 10.0.0.1 ping statistics --- 00:21:22.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.042 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:22.042 10:17:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.042 10:17:55 -- nvmf/common.sh@410 -- # return 0 00:21:22.042 10:17:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:22.042 10:17:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.042 10:17:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:22.042 10:17:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:22.042 10:17:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.042 10:17:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:22.042 10:17:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:22.042 10:17:55 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:22.042 10:17:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:22.042 10:17:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:22.042 10:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.042 10:17:55 -- nvmf/common.sh@469 -- # nvmfpid=3482538 00:21:22.042 10:17:55 -- nvmf/common.sh@470 -- # waitforlisten 3482538 00:21:22.042 10:17:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:22.042 10:17:55 -- common/autotest_common.sh@819 -- # '[' -z 3482538 ']' 00:21:22.042 10:17:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.042 10:17:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.042 10:17:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.042 10:17:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.042 10:17:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.042 [2024-04-17 10:17:55.364494] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:22.042 [2024-04-17 10:17:55.364550] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:22.303 [2024-04-17 10:17:55.459933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.303 [2024-04-17 10:17:55.573591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:22.303 [2024-04-17 10:17:55.573742] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.303 [2024-04-17 10:17:55.573753] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.303 [2024-04-17 10:17:55.573762] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
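
This second pass runs the same bdevio suite with hugepages disabled: nvmfappstart adds --no-huge -s 1024, which shows up in the DPDK EAL parameters above as -m 1024 --no-huge --iova-mode=va, i.e. the target is limited to roughly 1024 MB of ordinary (non-hugepage) memory. For comparison, the two target launch lines recorded in this log (same core mask and tracepoint mask, only the memory options differ):

    # nvmf_bdevio (default, hugepage-backed memory):
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78

    # nvmf_bdevio_no_huge (no hugepages, capped at 1024 MB):
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The bdevio initiator that follows is launched with the same pair of flags (--json /dev/fd/62 --no-huge -s 1024), so both ends of the NVMe/TCP connection are exercised without any hugepage allocations.
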
00:21:22.303 [2024-04-17 10:17:55.573881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:22.303 [2024-04-17 10:17:55.573992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:22.303 [2024-04-17 10:17:55.574106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.303 [2024-04-17 10:17:55.574106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.236 10:17:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.236 10:17:56 -- common/autotest_common.sh@852 -- # return 0 00:21:23.236 10:17:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:23.236 10:17:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 10:17:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.236 10:17:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.236 10:17:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 [2024-04-17 10:17:56.344226] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.236 10:17:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.236 10:17:56 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.236 10:17:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 Malloc0 00:21:23.236 10:17:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.236 10:17:56 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.236 10:17:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 10:17:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.236 10:17:56 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.236 10:17:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 10:17:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.236 10:17:56 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.236 10:17:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.236 10:17:56 -- common/autotest_common.sh@10 -- # set +x 00:21:23.236 [2024-04-17 10:17:56.390747] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.236 10:17:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.236 10:17:56 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:23.236 10:17:56 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:23.236 10:17:56 -- nvmf/common.sh@520 -- # config=() 00:21:23.236 10:17:56 -- nvmf/common.sh@520 -- # local subsystem config 00:21:23.236 10:17:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:23.236 10:17:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:23.236 { 00:21:23.236 "params": { 00:21:23.236 "name": "Nvme$subsystem", 00:21:23.236 "trtype": "$TEST_TRANSPORT", 00:21:23.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.236 "adrfam": "ipv4", 00:21:23.236 
"trsvcid": "$NVMF_PORT", 00:21:23.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.236 "hdgst": ${hdgst:-false}, 00:21:23.236 "ddgst": ${ddgst:-false} 00:21:23.236 }, 00:21:23.236 "method": "bdev_nvme_attach_controller" 00:21:23.236 } 00:21:23.236 EOF 00:21:23.236 )") 00:21:23.236 10:17:56 -- nvmf/common.sh@542 -- # cat 00:21:23.236 10:17:56 -- nvmf/common.sh@544 -- # jq . 00:21:23.236 10:17:56 -- nvmf/common.sh@545 -- # IFS=, 00:21:23.236 10:17:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:23.236 "params": { 00:21:23.236 "name": "Nvme1", 00:21:23.236 "trtype": "tcp", 00:21:23.236 "traddr": "10.0.0.2", 00:21:23.236 "adrfam": "ipv4", 00:21:23.236 "trsvcid": "4420", 00:21:23.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.236 "hdgst": false, 00:21:23.236 "ddgst": false 00:21:23.236 }, 00:21:23.236 "method": "bdev_nvme_attach_controller" 00:21:23.236 }' 00:21:23.236 [2024-04-17 10:17:56.440754] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:23.236 [2024-04-17 10:17:56.440811] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3482821 ] 00:21:23.236 [2024-04-17 10:17:56.526385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.494 [2024-04-17 10:17:56.641909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.494 [2024-04-17 10:17:56.642009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.494 [2024-04-17 10:17:56.642009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.751 [2024-04-17 10:17:56.840040] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:23.751 [2024-04-17 10:17:56.840076] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:23.751 I/O targets: 00:21:23.751 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:23.751 00:21:23.751 00:21:23.751 CUnit - A unit testing framework for C - Version 2.1-3 00:21:23.751 http://cunit.sourceforge.net/ 00:21:23.751 00:21:23.751 00:21:23.751 Suite: bdevio tests on: Nvme1n1 00:21:23.751 Test: blockdev write read block ...passed 00:21:23.751 Test: blockdev write zeroes read block ...passed 00:21:23.751 Test: blockdev write zeroes read no split ...passed 00:21:23.751 Test: blockdev write zeroes read split ...passed 00:21:23.751 Test: blockdev write zeroes read split partial ...passed 00:21:23.751 Test: blockdev reset ...[2024-04-17 10:17:56.968256] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.751 [2024-04-17 10:17:56.968319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e2070 (9): Bad file descriptor 00:21:23.751 [2024-04-17 10:17:56.998146] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:23.751 passed 00:21:23.751 Test: blockdev write read 8 blocks ...passed 00:21:23.751 Test: blockdev write read size > 128k ...passed 00:21:23.751 Test: blockdev write read invalid size ...passed 00:21:23.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:23.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:23.751 Test: blockdev write read max offset ...passed 00:21:24.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:24.008 Test: blockdev writev readv 8 blocks ...passed 00:21:24.008 Test: blockdev writev readv 30 x 1block ...passed 00:21:24.008 Test: blockdev writev readv block ...passed 00:21:24.008 Test: blockdev writev readv size > 128k ...passed 00:21:24.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:24.008 Test: blockdev comparev and writev ...[2024-04-17 10:17:57.210263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.210292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.210304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.210311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.210636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.210652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.210663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.210670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.211022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.211034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.211044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.211051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.211363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.211374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.211384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:24.008 [2024-04-17 10:17:57.211391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:24.008 passed 00:21:24.008 Test: blockdev nvme passthru rw ...passed 00:21:24.008 Test: blockdev nvme passthru vendor specific ...[2024-04-17 10:17:57.293036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.008 [2024-04-17 10:17:57.293053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.293202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.008 [2024-04-17 10:17:57.293211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:24.008 [2024-04-17 10:17:57.293369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.009 [2024-04-17 10:17:57.293378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:24.009 [2024-04-17 10:17:57.293533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.009 [2024-04-17 10:17:57.293542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:24.009 passed 00:21:24.009 Test: blockdev nvme admin passthru ...passed 00:21:24.265 Test: blockdev copy ...passed 00:21:24.265 00:21:24.265 Run Summary: Type Total Ran Passed Failed Inactive 00:21:24.265 suites 1 1 n/a 0 0 00:21:24.265 tests 23 23 23 0 0 00:21:24.265 asserts 152 152 152 0 n/a 00:21:24.265 00:21:24.265 Elapsed time = 1.023 seconds 00:21:24.523 10:17:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.523 10:17:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.523 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:21:24.523 10:17:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.523 10:17:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:24.523 10:17:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:24.523 10:17:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:24.523 10:17:57 -- nvmf/common.sh@116 -- # sync 00:21:24.523 10:17:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:24.523 10:17:57 -- nvmf/common.sh@119 -- # set +e 00:21:24.523 10:17:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:24.523 10:17:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:24.523 rmmod nvme_tcp 00:21:24.523 rmmod nvme_fabrics 00:21:24.523 rmmod nvme_keyring 00:21:24.523 10:17:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:24.523 10:17:57 -- nvmf/common.sh@123 -- # set -e 00:21:24.523 10:17:57 -- nvmf/common.sh@124 -- # return 0 00:21:24.523 10:17:57 -- nvmf/common.sh@477 -- # '[' -n 3482538 ']' 00:21:24.523 10:17:57 -- nvmf/common.sh@478 -- # killprocess 3482538 00:21:24.523 10:17:57 -- common/autotest_common.sh@926 -- # '[' -z 3482538 ']' 00:21:24.523 10:17:57 -- common/autotest_common.sh@930 -- # kill -0 3482538 00:21:24.523 10:17:57 -- common/autotest_common.sh@931 -- # uname 00:21:24.523 10:17:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.523 10:17:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3482538 00:21:24.523 10:17:57 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:24.523 10:17:57 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:24.523 10:17:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3482538' 00:21:24.523 killing process with pid 3482538 00:21:24.523 10:17:57 -- common/autotest_common.sh@945 -- # kill 3482538 00:21:24.523 10:17:57 -- common/autotest_common.sh@950 -- # wait 3482538 00:21:25.088 10:17:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:25.088 10:17:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.088 10:17:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.088 10:17:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.088 10:17:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:25.088 10:17:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.088 10:17:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.088 10:17:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.617 10:18:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:27.617 00:21:27.617 real 0m10.857s 00:21:27.617 user 0m14.212s 00:21:27.617 sys 0m5.333s 00:21:27.617 10:18:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.617 10:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:27.617 ************************************ 00:21:27.617 END TEST nvmf_bdevio_no_huge 00:21:27.617 ************************************ 00:21:27.617 10:18:00 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.617 10:18:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:27.617 10:18:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.617 10:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:27.617 ************************************ 00:21:27.617 START TEST nvmf_tls 00:21:27.617 ************************************ 00:21:27.617 10:18:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:27.617 * Looking for test storage... 
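
Both bdevio passes above populate the target with the same five RPCs; rpc_cmd in the harness issues them against the target's RPC socket at /var/tmp/spdk.sock (via scripts/rpc.py). Spelled out as plain rpc.py calls, with names, sizes and addresses as recorded in the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as used by the test
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB ramdisk bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to that subsystem at 10.0.0.2:4420 through the generated JSON shown earlier and runs the CUnit suite (23 tests, all passing) against the resulting Nvme1n1 bdev.
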
00:21:27.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.617 10:18:00 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.617 10:18:00 -- nvmf/common.sh@7 -- # uname -s 00:21:27.617 10:18:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.617 10:18:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.617 10:18:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.617 10:18:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.617 10:18:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.617 10:18:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.617 10:18:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.617 10:18:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.617 10:18:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.617 10:18:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.617 10:18:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:27.617 10:18:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:27.617 10:18:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.617 10:18:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.617 10:18:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.617 10:18:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.617 10:18:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.617 10:18:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.617 10:18:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.617 10:18:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.617 10:18:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.617 10:18:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.617 10:18:00 -- paths/export.sh@5 -- # export PATH 00:21:27.617 10:18:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.617 10:18:00 -- nvmf/common.sh@46 -- # : 0 00:21:27.617 10:18:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:27.617 10:18:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:27.617 10:18:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:27.617 10:18:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.617 10:18:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.617 10:18:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:27.617 10:18:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:27.617 10:18:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:27.617 10:18:00 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.617 10:18:00 -- target/tls.sh@71 -- # nvmftestinit 00:21:27.617 10:18:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:27.617 10:18:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.617 10:18:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:27.617 10:18:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:27.617 10:18:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:27.618 10:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.618 10:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.618 10:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.618 10:18:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:27.618 10:18:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:27.618 10:18:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:27.618 10:18:00 -- common/autotest_common.sh@10 -- # set +x 00:21:32.879 10:18:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:32.879 10:18:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:32.879 10:18:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:32.879 10:18:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:32.879 10:18:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:32.879 10:18:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:32.879 10:18:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:32.879 10:18:05 -- nvmf/common.sh@294 -- # net_devs=() 00:21:32.879 10:18:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:32.879 10:18:05 -- nvmf/common.sh@295 -- # e810=() 00:21:32.879 
10:18:05 -- nvmf/common.sh@295 -- # local -ga e810 00:21:32.879 10:18:05 -- nvmf/common.sh@296 -- # x722=() 00:21:32.879 10:18:05 -- nvmf/common.sh@296 -- # local -ga x722 00:21:32.879 10:18:05 -- nvmf/common.sh@297 -- # mlx=() 00:21:32.879 10:18:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:32.879 10:18:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.879 10:18:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:32.879 10:18:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:32.879 10:18:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.879 10:18:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:32.879 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:32.879 10:18:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.879 10:18:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:32.879 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:32.879 10:18:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.879 10:18:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.879 10:18:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.879 10:18:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:32.879 Found net devices under 
0000:af:00.0: cvl_0_0 00:21:32.879 10:18:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.879 10:18:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.879 10:18:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.879 10:18:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.879 10:18:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:32.879 Found net devices under 0000:af:00.1: cvl_0_1 00:21:32.879 10:18:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.879 10:18:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:32.879 10:18:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:32.879 10:18:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:32.879 10:18:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.879 10:18:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.879 10:18:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.879 10:18:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:32.879 10:18:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.879 10:18:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.879 10:18:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:32.879 10:18:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.879 10:18:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.879 10:18:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:32.879 10:18:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:32.879 10:18:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.879 10:18:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.879 10:18:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.879 10:18:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.879 10:18:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:32.879 10:18:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.879 10:18:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.879 10:18:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.879 10:18:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:32.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:21:32.879 00:21:32.879 --- 10.0.0.2 ping statistics --- 00:21:32.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.879 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:32.879 10:18:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:21:32.879 00:21:32.879 --- 10.0.0.1 ping statistics --- 00:21:32.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.879 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:21:32.879 10:18:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.879 10:18:06 -- nvmf/common.sh@410 -- # return 0 00:21:32.879 10:18:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:32.879 10:18:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.879 10:18:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:32.879 10:18:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:32.879 10:18:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.879 10:18:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:32.879 10:18:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:33.137 10:18:06 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:33.137 10:18:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:33.137 10:18:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:33.137 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.137 10:18:06 -- nvmf/common.sh@469 -- # nvmfpid=3486894 00:21:33.137 10:18:06 -- nvmf/common.sh@470 -- # waitforlisten 3486894 00:21:33.137 10:18:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:33.137 10:18:06 -- common/autotest_common.sh@819 -- # '[' -z 3486894 ']' 00:21:33.137 10:18:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.137 10:18:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:33.137 10:18:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.137 10:18:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:33.137 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.137 [2024-04-17 10:18:06.278177] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:33.137 [2024-04-17 10:18:06.278230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.137 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.137 [2024-04-17 10:18:06.359017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.137 [2024-04-17 10:18:06.445939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.137 [2024-04-17 10:18:06.446082] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.137 [2024-04-17 10:18:06.446094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.137 [2024-04-17 10:18:06.446108] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
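The nvmf_tcp_init trace above (nvmf/common.sh@228-@267) builds the whole test topology with plain iproute2: the first CVL port becomes the target interface inside a private network namespace at 10.0.0.2, the second port stays in the root namespace as the initiator at 10.0.0.1, and a single iptables rule admits NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same bring-up, reusing the interface and namespace names from the log, would be:

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator address, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                     # root ns -> target ns, as in the trace
  ip netns exec "$NS" ping -c 1 10.0.0.1                 # target ns -> root ns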
00:21:33.137 [2024-04-17 10:18:06.446130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.137 10:18:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.137 10:18:06 -- common/autotest_common.sh@852 -- # return 0 00:21:33.137 10:18:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:33.137 10:18:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:33.137 10:18:06 -- common/autotest_common.sh@10 -- # set +x 00:21:33.395 10:18:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.395 10:18:06 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:33.395 10:18:06 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:33.395 true 00:21:33.395 10:18:06 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:33.395 10:18:06 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:33.653 10:18:06 -- target/tls.sh@82 -- # version=0 00:21:33.653 10:18:06 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:33.653 10:18:06 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:33.910 10:18:07 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:33.910 10:18:07 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:34.167 10:18:07 -- target/tls.sh@90 -- # version=13 00:21:34.167 10:18:07 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:34.167 10:18:07 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:34.424 10:18:07 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.424 10:18:07 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:34.682 10:18:07 -- target/tls.sh@98 -- # version=7 00:21:34.682 10:18:07 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:34.682 10:18:07 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:34.682 10:18:07 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:34.940 10:18:08 -- target/tls.sh@105 -- # ktls=false 00:21:34.940 10:18:08 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:34.940 10:18:08 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:35.197 10:18:08 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:35.197 10:18:08 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.455 10:18:08 -- target/tls.sh@113 -- # ktls=true 00:21:35.455 10:18:08 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:35.455 10:18:08 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:35.713 10:18:08 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:35.713 10:18:08 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:35.713 10:18:09 -- target/tls.sh@121 -- # ktls=false 00:21:35.713 10:18:09 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:35.713 10:18:09 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:21:35.713 10:18:09 -- target/tls.sh@49 -- # local key hash crc 00:21:35.713 10:18:09 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:35.713 10:18:09 -- target/tls.sh@51 -- # hash=01 00:21:35.713 10:18:09 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:35.713 10:18:09 -- target/tls.sh@52 -- # gzip -1 -c 00:21:35.713 10:18:09 -- target/tls.sh@52 -- # tail -c8 00:21:35.713 10:18:09 -- target/tls.sh@52 -- # head -c 4 00:21:35.713 10:18:09 -- target/tls.sh@52 -- # crc='p$H�' 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:35.971 10:18:09 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:35.971 10:18:09 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:35.971 10:18:09 -- target/tls.sh@49 -- # local key hash crc 00:21:35.971 10:18:09 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:35.971 10:18:09 -- target/tls.sh@51 -- # hash=01 00:21:35.971 10:18:09 -- target/tls.sh@52 -- # tail -c8 00:21:35.971 10:18:09 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:35.971 10:18:09 -- target/tls.sh@52 -- # gzip -1 -c 00:21:35.971 10:18:09 -- target/tls.sh@52 -- # head -c 4 00:21:35.971 10:18:09 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:35.971 10:18:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:35.971 10:18:09 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:35.971 10:18:09 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:35.971 10:18:09 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:35.971 10:18:09 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:35.971 10:18:09 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:35.971 10:18:09 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:35.971 10:18:09 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:35.971 10:18:09 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:36.229 10:18:09 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:36.486 10:18:09 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.486 10:18:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:36.486 10:18:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.744 [2024-04-17 10:18:09.834419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
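The format_interchange_psk trace above (target/tls.sh@49-@54, invoked at @127/@128) derives the NVMeTLSkey-1 interchange string without any crypto tooling: gzip -1 is run only for its trailer, whose first four trailer bytes are the little-endian CRC-32 of the input, and the key followed by that CRC is base64-encoded under an "NVMeTLSkey-1:<hash>:" header. A rough standalone re-creation of the helper, piping the binary CRC straight into base64 instead of parking it in a shell variable as the traced script does, is:

  format_interchange_psk() {
      local key=$1 hash=$2
      # gzip's 8-byte trailer is CRC-32 (little-endian) followed by ISIZE,
      # so tail -c8 | head -c4 extracts the CRC-32 of the key string
      local b64
      b64=$({ echo -n "$key"
              echo -n "$key" | gzip -1 -c | tail -c8 | head -c4
            } | base64 -w0)
      # the hash field ("01" for key1/key2, "02" for key_long later in this run)
      # is only stamped into the header by this helper
      echo "NVMeTLSkey-1:$hash:$b64:"
  }

  format_interchange_psk 00112233445566778899aabbccddeeff 01
  # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: (matches the trace)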
00:21:36.744 10:18:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:37.001 10:18:10 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:37.001 [2024-04-17 10:18:10.307707] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.001 [2024-04-17 10:18:10.307941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.001 10:18:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:37.258 malloc0 00:21:37.258 10:18:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:37.516 10:18:10 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:37.773 10:18:11 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:37.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.959 Initializing NVMe Controllers 00:21:49.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:49.959 Initialization complete. Launching workers. 
00:21:49.959 ======================================================== 00:21:49.959 Latency(us) 00:21:49.959 Device Information : IOPS MiB/s Average min max 00:21:49.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11207.84 43.78 5711.23 1194.58 7241.31 00:21:49.959 ======================================================== 00:21:49.959 Total : 11207.84 43.78 5711.23 1194.58 7241.31 00:21:49.959 00:21:49.959 10:18:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:49.959 10:18:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.959 10:18:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.959 10:18:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:49.959 10:18:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:49.959 10:18:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.959 10:18:21 -- target/tls.sh@28 -- # bdevperf_pid=3490057 00:21:49.959 10:18:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.959 10:18:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.959 10:18:21 -- target/tls.sh@31 -- # waitforlisten 3490057 /var/tmp/bdevperf.sock 00:21:49.959 10:18:21 -- common/autotest_common.sh@819 -- # '[' -z 3490057 ']' 00:21:49.959 10:18:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.959 10:18:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.959 10:18:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.959 10:18:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.959 10:18:21 -- common/autotest_common.sh@10 -- # set +x 00:21:49.959 [2024-04-17 10:18:21.211875] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
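Stripped of the jenkins workspace paths, the setup_nvmf_tgt sequence traced at target/tls.sh@139-@146 above reduces to a handful of RPCs against a --wait-for-rpc nvmf_tgt, followed by a TLS-enabled spdk_nvme_perf run. A trimmed sketch of that flow (RPC names and arguments as they appear in the trace; paths shortened and the waitforlisten helper replaced by a simple socket poll) is:

  NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
  RPC=scripts/rpc.py                                   # full workspace path in the trace
  KEY=test/nvmf/target/key1.txt                        # mode 0600, NVMeTLSkey-1:01:... contents

  $NS_EXEC build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # stand-in for waitforlisten

  $RPC sock_set_default_impl -i ssl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

  # initiator side, run against the listener above exactly as in the trace
  $NS_EXEC build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$KEY"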
00:21:49.959 [2024-04-17 10:18:21.211939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490057 ] 00:21:49.959 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.959 [2024-04-17 10:18:21.269976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.959 [2024-04-17 10:18:21.335550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.959 10:18:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.959 10:18:22 -- common/autotest_common.sh@852 -- # return 0 00:21:49.959 10:18:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:49.959 [2024-04-17 10:18:22.367926] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.959 TLSTESTn1 00:21:49.959 10:18:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:49.959 Running I/O for 10 seconds... 00:21:59.933 00:21:59.933 Latency(us) 00:21:59.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.933 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:59.933 Verification LBA range: start 0x0 length 0x2000 00:21:59.933 TLSTESTn1 : 10.02 4476.53 17.49 0.00 0.00 28563.97 3187.43 47185.92 00:21:59.933 =================================================================================================================== 00:21:59.933 Total : 4476.53 17.49 0.00 0.00 28563.97 3187.43 47185.92 00:21:59.933 0 00:21:59.933 10:18:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.933 10:18:32 -- target/tls.sh@45 -- # killprocess 3490057 00:21:59.933 10:18:32 -- common/autotest_common.sh@926 -- # '[' -z 3490057 ']' 00:21:59.933 10:18:32 -- common/autotest_common.sh@930 -- # kill -0 3490057 00:21:59.933 10:18:32 -- common/autotest_common.sh@931 -- # uname 00:21:59.933 10:18:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.933 10:18:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3490057 00:21:59.933 10:18:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:59.933 10:18:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:59.933 10:18:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3490057' 00:21:59.933 killing process with pid 3490057 00:21:59.933 10:18:32 -- common/autotest_common.sh@945 -- # kill 3490057 00:21:59.933 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.933 00:21:59.933 Latency(us) 00:21:59.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.933 =================================================================================================================== 00:21:59.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.933 10:18:32 -- common/autotest_common.sh@950 -- # wait 3490057 00:21:59.933 10:18:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:59.933 10:18:32 -- common/autotest_common.sh@640 -- # local es=0 00:21:59.933 10:18:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:59.933 10:18:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:59.933 10:18:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.933 10:18:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:59.933 10:18:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:59.934 10:18:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:59.934 10:18:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:59.934 10:18:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:59.934 10:18:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:59.934 10:18:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:59.934 10:18:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.934 10:18:32 -- target/tls.sh@28 -- # bdevperf_pid=3492152 00:21:59.934 10:18:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.934 10:18:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:59.934 10:18:32 -- target/tls.sh@31 -- # waitforlisten 3492152 /var/tmp/bdevperf.sock 00:21:59.934 10:18:32 -- common/autotest_common.sh@819 -- # '[' -z 3492152 ']' 00:21:59.934 10:18:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.934 10:18:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.934 10:18:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.934 10:18:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.934 10:18:32 -- common/autotest_common.sh@10 -- # set +x 00:21:59.934 [2024-04-17 10:18:32.940270] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:21:59.934 [2024-04-17 10:18:32.940334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492152 ] 00:21:59.934 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.934 [2024-04-17 10:18:32.997629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.934 [2024-04-17 10:18:33.059274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.868 10:18:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.868 10:18:33 -- common/autotest_common.sh@852 -- # return 0 00:22:00.868 10:18:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:22:00.868 [2024-04-17 10:18:34.091516] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.868 [2024-04-17 10:18:34.098060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:00.868 [2024-04-17 10:18:34.098714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2262600 (107): Transport endpoint is not connected 00:22:00.868 [2024-04-17 10:18:34.099707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2262600 (9): Bad file descriptor 00:22:00.868 [2024-04-17 10:18:34.100709] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.868 [2024-04-17 10:18:34.100719] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:00.868 [2024-04-17 10:18:34.100728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:00.868 request: 00:22:00.868 { 00:22:00.868 "name": "TLSTEST", 00:22:00.868 "trtype": "tcp", 00:22:00.868 "traddr": "10.0.0.2", 00:22:00.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.868 "adrfam": "ipv4", 00:22:00.868 "trsvcid": "4420", 00:22:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.868 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:22:00.868 "method": "bdev_nvme_attach_controller", 00:22:00.868 "req_id": 1 00:22:00.868 } 00:22:00.868 Got JSON-RPC error response 00:22:00.868 response: 00:22:00.868 { 00:22:00.868 "code": -32602, 00:22:00.868 "message": "Invalid parameters" 00:22:00.868 } 00:22:00.868 10:18:34 -- target/tls.sh@36 -- # killprocess 3492152 00:22:00.868 10:18:34 -- common/autotest_common.sh@926 -- # '[' -z 3492152 ']' 00:22:00.868 10:18:34 -- common/autotest_common.sh@930 -- # kill -0 3492152 00:22:00.868 10:18:34 -- common/autotest_common.sh@931 -- # uname 00:22:00.868 10:18:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:00.868 10:18:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3492152 00:22:00.868 10:18:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:00.868 10:18:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:00.868 10:18:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3492152' 00:22:00.868 killing process with pid 3492152 00:22:00.868 10:18:34 -- common/autotest_common.sh@945 -- # kill 3492152 00:22:00.868 Received shutdown signal, test time was about 10.000000 seconds 00:22:00.868 00:22:00.869 Latency(us) 00:22:00.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.869 =================================================================================================================== 00:22:00.869 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.869 10:18:34 -- common/autotest_common.sh@950 -- # wait 3492152 00:22:01.128 10:18:34 -- target/tls.sh@37 -- # return 1 00:22:01.128 10:18:34 -- common/autotest_common.sh@643 -- # es=1 00:22:01.128 10:18:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:01.128 10:18:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:01.128 10:18:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:01.128 10:18:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:01.128 10:18:34 -- common/autotest_common.sh@640 -- # local es=0 00:22:01.128 10:18:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:01.128 10:18:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:01.128 10:18:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:01.128 10:18:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:01.128 10:18:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:01.128 10:18:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:01.128 10:18:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:01.128 10:18:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:01.128 10:18:34 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:22:01.128 10:18:34 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:01.128 10:18:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.128 10:18:34 -- target/tls.sh@28 -- # bdevperf_pid=3492432 00:22:01.128 10:18:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:01.128 10:18:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:01.128 10:18:34 -- target/tls.sh@31 -- # waitforlisten 3492432 /var/tmp/bdevperf.sock 00:22:01.128 10:18:34 -- common/autotest_common.sh@819 -- # '[' -z 3492432 ']' 00:22:01.128 10:18:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.128 10:18:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.128 10:18:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.128 10:18:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.128 10:18:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.128 [2024-04-17 10:18:34.413670] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:01.128 [2024-04-17 10:18:34.413733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492432 ] 00:22:01.128 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.396 [2024-04-17 10:18:34.471974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.396 [2024-04-17 10:18:34.534242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.328 10:18:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.328 10:18:35 -- common/autotest_common.sh@852 -- # return 0 00:22:02.328 10:18:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.328 [2024-04-17 10:18:35.578671] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.328 [2024-04-17 10:18:35.583068] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.328 [2024-04-17 10:18:35.583098] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:02.328 [2024-04-17 10:18:35.583129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.328 [2024-04-17 10:18:35.583792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151d600 (107): Transport endpoint is not connected 00:22:02.328 [2024-04-17 10:18:35.584784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x151d600 (9): Bad file descriptor 00:22:02.328 [2024-04-17 10:18:35.585785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.328 [2024-04-17 10:18:35.585795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:02.328 [2024-04-17 10:18:35.585803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.328 request: 00:22:02.328 { 00:22:02.328 "name": "TLSTEST", 00:22:02.328 "trtype": "tcp", 00:22:02.328 "traddr": "10.0.0.2", 00:22:02.328 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.328 "adrfam": "ipv4", 00:22:02.328 "trsvcid": "4420", 00:22:02.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.328 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:02.328 "method": "bdev_nvme_attach_controller", 00:22:02.329 "req_id": 1 00:22:02.329 } 00:22:02.329 Got JSON-RPC error response 00:22:02.329 response: 00:22:02.329 { 00:22:02.329 "code": -32602, 00:22:02.329 "message": "Invalid parameters" 00:22:02.329 } 00:22:02.329 10:18:35 -- target/tls.sh@36 -- # killprocess 3492432 00:22:02.329 10:18:35 -- common/autotest_common.sh@926 -- # '[' -z 3492432 ']' 00:22:02.329 10:18:35 -- common/autotest_common.sh@930 -- # kill -0 3492432 00:22:02.329 10:18:35 -- common/autotest_common.sh@931 -- # uname 00:22:02.329 10:18:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.329 10:18:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3492432 00:22:02.329 10:18:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:02.329 10:18:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:02.329 10:18:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3492432' 00:22:02.329 killing process with pid 3492432 00:22:02.329 10:18:35 -- common/autotest_common.sh@945 -- # kill 3492432 00:22:02.329 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.329 00:22:02.329 Latency(us) 00:22:02.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.329 =================================================================================================================== 00:22:02.329 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:02.329 10:18:35 -- common/autotest_common.sh@950 -- # wait 3492432 00:22:02.587 10:18:35 -- target/tls.sh@37 -- # return 1 00:22:02.587 10:18:35 -- common/autotest_common.sh@643 -- # es=1 00:22:02.587 10:18:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:02.587 10:18:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:02.587 10:18:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:02.587 10:18:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.587 10:18:35 -- common/autotest_common.sh@640 -- # local es=0 00:22:02.587 10:18:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.587 10:18:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:02.587 10:18:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.587 10:18:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:02.587 10:18:35 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.587 10:18:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:02.587 10:18:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.587 10:18:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:02.587 10:18:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.587 10:18:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:22:02.587 10:18:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.587 10:18:35 -- target/tls.sh@28 -- # bdevperf_pid=3492702 00:22:02.587 10:18:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.587 10:18:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.587 10:18:35 -- target/tls.sh@31 -- # waitforlisten 3492702 /var/tmp/bdevperf.sock 00:22:02.587 10:18:35 -- common/autotest_common.sh@819 -- # '[' -z 3492702 ']' 00:22:02.587 10:18:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.587 10:18:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.587 10:18:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.587 10:18:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.587 10:18:35 -- common/autotest_common.sh@10 -- # set +x 00:22:02.587 [2024-04-17 10:18:35.899009] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:02.587 [2024-04-17 10:18:35.899073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492702 ] 00:22:02.845 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.845 [2024-04-17 10:18:35.957008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.845 [2024-04-17 10:18:36.027152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.778 10:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:03.778 10:18:36 -- common/autotest_common.sh@852 -- # return 0 00:22:03.778 10:18:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:22:03.778 [2024-04-17 10:18:37.056674] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.778 [2024-04-17 10:18:37.065636] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.778 [2024-04-17 10:18:37.065676] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:03.778 [2024-04-17 10:18:37.065707] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:03.778 [2024-04-17 10:18:37.065885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2016600 (107): Transport endpoint is not connected 00:22:03.778 [2024-04-17 10:18:37.066877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2016600 (9): Bad file descriptor 00:22:03.778 [2024-04-17 10:18:37.067879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:03.778 [2024-04-17 10:18:37.067889] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:03.778 [2024-04-17 10:18:37.067896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:03.778 request: 00:22:03.778 { 00:22:03.778 "name": "TLSTEST", 00:22:03.778 "trtype": "tcp", 00:22:03.778 "traddr": "10.0.0.2", 00:22:03.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.778 "adrfam": "ipv4", 00:22:03.778 "trsvcid": "4420", 00:22:03.778 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.778 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:22:03.778 "method": "bdev_nvme_attach_controller", 00:22:03.778 "req_id": 1 00:22:03.778 } 00:22:03.778 Got JSON-RPC error response 00:22:03.778 response: 00:22:03.778 { 00:22:03.778 "code": -32602, 00:22:03.778 "message": "Invalid parameters" 00:22:03.778 } 00:22:03.778 10:18:37 -- target/tls.sh@36 -- # killprocess 3492702 00:22:03.779 10:18:37 -- common/autotest_common.sh@926 -- # '[' -z 3492702 ']' 00:22:03.779 10:18:37 -- common/autotest_common.sh@930 -- # kill -0 3492702 00:22:03.779 10:18:37 -- common/autotest_common.sh@931 -- # uname 00:22:03.779 10:18:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:03.779 10:18:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3492702 00:22:04.037 10:18:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:04.037 10:18:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:04.037 10:18:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3492702' 00:22:04.037 killing process with pid 3492702 00:22:04.037 10:18:37 -- common/autotest_common.sh@945 -- # kill 3492702 00:22:04.037 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.037 00:22:04.037 Latency(us) 00:22:04.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.037 =================================================================================================================== 00:22:04.037 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.037 10:18:37 -- common/autotest_common.sh@950 -- # wait 3492702 00:22:04.037 10:18:37 -- target/tls.sh@37 -- # return 1 00:22:04.037 10:18:37 -- common/autotest_common.sh@643 -- # es=1 00:22:04.037 10:18:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:04.037 10:18:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:04.037 10:18:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:04.037 10:18:37 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.037 10:18:37 -- common/autotest_common.sh@640 -- # local es=0 00:22:04.037 10:18:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.037 10:18:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:04.037 10:18:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.037 10:18:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:04.037 10:18:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.037 10:18:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:04.037 10:18:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:04.037 10:18:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:04.037 10:18:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:04.037 10:18:37 -- target/tls.sh@23 -- # psk= 00:22:04.037 10:18:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.037 10:18:37 -- target/tls.sh@28 
-- # bdevperf_pid=3492982 00:22:04.037 10:18:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:04.037 10:18:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:04.037 10:18:37 -- target/tls.sh@31 -- # waitforlisten 3492982 /var/tmp/bdevperf.sock 00:22:04.037 10:18:37 -- common/autotest_common.sh@819 -- # '[' -z 3492982 ']' 00:22:04.037 10:18:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.037 10:18:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:04.037 10:18:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:04.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.037 10:18:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:04.037 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:22:04.295 [2024-04-17 10:18:37.383787] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:04.296 [2024-04-17 10:18:37.383851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492982 ] 00:22:04.296 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.296 [2024-04-17 10:18:37.441393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.296 [2024-04-17 10:18:37.502769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.228 10:18:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:05.228 10:18:38 -- common/autotest_common.sh@852 -- # return 0 00:22:05.228 10:18:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:05.228 [2024-04-17 10:18:38.529572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:05.228 [2024-04-17 10:18:38.531430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1882c80 (9): Bad file descriptor 00:22:05.228 [2024-04-17 10:18:38.532429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:05.228 [2024-04-17 10:18:38.532439] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:05.228 [2024-04-17 10:18:38.532447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:05.228 request: 00:22:05.228 { 00:22:05.228 "name": "TLSTEST", 00:22:05.228 "trtype": "tcp", 00:22:05.228 "traddr": "10.0.0.2", 00:22:05.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.228 "adrfam": "ipv4", 00:22:05.228 "trsvcid": "4420", 00:22:05.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.228 "method": "bdev_nvme_attach_controller", 00:22:05.228 "req_id": 1 00:22:05.228 } 00:22:05.228 Got JSON-RPC error response 00:22:05.228 response: 00:22:05.228 { 00:22:05.228 "code": -32602, 00:22:05.228 "message": "Invalid parameters" 00:22:05.228 } 00:22:05.228 10:18:38 -- target/tls.sh@36 -- # killprocess 3492982 00:22:05.228 10:18:38 -- common/autotest_common.sh@926 -- # '[' -z 3492982 ']' 00:22:05.228 10:18:38 -- common/autotest_common.sh@930 -- # kill -0 3492982 00:22:05.487 10:18:38 -- common/autotest_common.sh@931 -- # uname 00:22:05.487 10:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.487 10:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3492982 00:22:05.487 10:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:05.487 10:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:05.487 10:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3492982' 00:22:05.487 killing process with pid 3492982 00:22:05.487 10:18:38 -- common/autotest_common.sh@945 -- # kill 3492982 00:22:05.487 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.487 00:22:05.487 Latency(us) 00:22:05.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.487 =================================================================================================================== 00:22:05.487 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.487 10:18:38 -- common/autotest_common.sh@950 -- # wait 3492982 00:22:05.487 10:18:38 -- target/tls.sh@37 -- # return 1 00:22:05.487 10:18:38 -- common/autotest_common.sh@643 -- # es=1 00:22:05.487 10:18:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:05.487 10:18:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:05.487 10:18:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:05.487 10:18:38 -- target/tls.sh@167 -- # killprocess 3486894 00:22:05.487 10:18:38 -- common/autotest_common.sh@926 -- # '[' -z 3486894 ']' 00:22:05.487 10:18:38 -- common/autotest_common.sh@930 -- # kill -0 3486894 00:22:05.487 10:18:38 -- common/autotest_common.sh@931 -- # uname 00:22:05.487 10:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.487 10:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3486894 00:22:05.746 10:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:05.746 10:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:05.747 10:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3486894' 00:22:05.747 killing process with pid 3486894 00:22:05.747 10:18:38 -- common/autotest_common.sh@945 -- # kill 3486894 00:22:05.747 10:18:38 -- common/autotest_common.sh@950 -- # wait 3486894 00:22:05.747 10:18:39 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:22:05.747 10:18:39 -- target/tls.sh@49 -- # local key hash crc 00:22:05.747 10:18:39 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:05.747 10:18:39 -- target/tls.sh@51 -- # hash=02 00:22:06.005 10:18:39 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:22:06.005 10:18:39 -- target/tls.sh@52 -- # gzip -1 -c 00:22:06.005 10:18:39 -- target/tls.sh@52 -- # tail -c8 00:22:06.005 10:18:39 -- target/tls.sh@52 -- # head -c 4 00:22:06.005 10:18:39 -- target/tls.sh@52 -- # crc='�e�'\''' 00:22:06.005 10:18:39 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:22:06.005 10:18:39 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:22:06.005 10:18:39 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:06.006 10:18:39 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:06.006 10:18:39 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.006 10:18:39 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:06.006 10:18:39 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.006 10:18:39 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:22:06.006 10:18:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:06.006 10:18:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:06.006 10:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:06.006 10:18:39 -- nvmf/common.sh@469 -- # nvmfpid=3493297 00:22:06.006 10:18:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.006 10:18:39 -- nvmf/common.sh@470 -- # waitforlisten 3493297 00:22:06.006 10:18:39 -- common/autotest_common.sh@819 -- # '[' -z 3493297 ']' 00:22:06.006 10:18:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.006 10:18:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:06.006 10:18:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.006 10:18:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:06.006 10:18:39 -- common/autotest_common.sh@10 -- # set +x 00:22:06.006 [2024-04-17 10:18:39.154348] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:06.006 [2024-04-17 10:18:39.154403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.006 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.006 [2024-04-17 10:18:39.231604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.006 [2024-04-17 10:18:39.319525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:06.006 [2024-04-17 10:18:39.319667] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.006 [2024-04-17 10:18:39.319679] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.006 [2024-04-17 10:18:39.319688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
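For reference, the format_interchange_psk pipeline logged above (target/tls.sh@49-54) builds the NVMeTLSkey-1 string by appending a CRC32 to the configured key text and base64-encoding the result; the gzip detour works because a gzip stream's 8-byte trailer begins with the little-endian CRC32 of its input. A minimal standalone sketch of the same computation, assuming only GNU gzip and coreutils (everything except the variable names is lifted from the pipeline above):

  # Sketch of the format_interchange_psk step exercised above.
  key=00112233445566778899aabbccddeeff0011223344556677   # PSK text from the log, used verbatim (not hex-decoded)
  hash=02   # hash identifier passed to format_interchange_psk; in the TLS PSK interchange format this field selects the HKDF hash (02 is generally documented as SHA-384)
  # Per RFC 1952 the last 8 bytes of a gzip stream are CRC32 (little-endian) then ISIZE,
  # so `gzip -1 -c | tail -c8 | head -c4` yields the raw CRC32 bytes of the input without a dedicated tool.
  b64=$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64)
  echo "NVMeTLSkey-1:$hash:$b64:"
  # Prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: exactly as captured above.

Keeping the CRC bytes inside the pipeline avoids storing raw binary in a shell variable, which can be fragile when the CRC happens to contain NUL or newline bytes; only the base64 text is ever assigned here.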
00:22:06.006 [2024-04-17 10:18:39.319708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.939 10:18:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.939 10:18:40 -- common/autotest_common.sh@852 -- # return 0 00:22:06.939 10:18:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:06.939 10:18:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:06.939 10:18:40 -- common/autotest_common.sh@10 -- # set +x 00:22:06.939 10:18:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.939 10:18:40 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.939 10:18:40 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.939 10:18:40 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.939 [2024-04-17 10:18:40.270302] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.196 10:18:40 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.196 10:18:40 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.453 [2024-04-17 10:18:40.739547] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.453 [2024-04-17 10:18:40.739773] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.453 10:18:40 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.712 malloc0 00:22:07.712 10:18:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.035 10:18:41 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:08.317 10:18:41 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:08.317 10:18:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.317 10:18:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.317 10:18:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.317 10:18:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:08.317 10:18:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.317 10:18:41 -- target/tls.sh@28 -- # bdevperf_pid=3493656 00:22:08.317 10:18:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.317 10:18:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.317 10:18:41 -- target/tls.sh@31 -- # waitforlisten 3493656 /var/tmp/bdevperf.sock 00:22:08.317 10:18:41 -- common/autotest_common.sh@819 -- # '[' -z 3493656 
']' 00:22:08.317 10:18:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.317 10:18:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:08.317 10:18:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.317 10:18:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:08.317 10:18:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.317 [2024-04-17 10:18:41.502449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:08.317 [2024-04-17 10:18:41.502510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493656 ] 00:22:08.317 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.317 [2024-04-17 10:18:41.560999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.317 [2024-04-17 10:18:41.628871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.253 10:18:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.253 10:18:42 -- common/autotest_common.sh@852 -- # return 0 00:22:09.253 10:18:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:09.535 [2024-04-17 10:18:42.665599] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.535 TLSTESTn1 00:22:09.535 10:18:42 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:09.793 Running I/O for 10 seconds... 
00:22:19.761 00:22:19.761 Latency(us) 00:22:19.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.761 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:19.761 Verification LBA range: start 0x0 length 0x2000 00:22:19.761 TLSTESTn1 : 10.03 4469.89 17.46 0.00 0.00 28600.67 5630.14 50760.61 00:22:19.761 =================================================================================================================== 00:22:19.761 Total : 4469.89 17.46 0.00 0.00 28600.67 5630.14 50760.61 00:22:19.761 0 00:22:19.761 10:18:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.761 10:18:52 -- target/tls.sh@45 -- # killprocess 3493656 00:22:19.761 10:18:52 -- common/autotest_common.sh@926 -- # '[' -z 3493656 ']' 00:22:19.761 10:18:52 -- common/autotest_common.sh@930 -- # kill -0 3493656 00:22:19.761 10:18:52 -- common/autotest_common.sh@931 -- # uname 00:22:19.761 10:18:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.761 10:18:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3493656 00:22:19.761 10:18:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:19.761 10:18:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:19.761 10:18:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3493656' 00:22:19.761 killing process with pid 3493656 00:22:19.761 10:18:52 -- common/autotest_common.sh@945 -- # kill 3493656 00:22:19.761 Received shutdown signal, test time was about 10.000000 seconds 00:22:19.761 00:22:19.761 Latency(us) 00:22:19.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.761 =================================================================================================================== 00:22:19.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.761 10:18:52 -- common/autotest_common.sh@950 -- # wait 3493656 00:22:20.020 10:18:53 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.020 10:18:53 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.020 10:18:53 -- common/autotest_common.sh@640 -- # local es=0 00:22:20.020 10:18:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.020 10:18:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:20.020 10:18:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.020 10:18:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:20.020 10:18:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:20.020 10:18:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:20.020 10:18:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:20.020 10:18:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:20.020 10:18:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:20.020 10:18:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:20.020 10:18:53 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.020 10:18:53 -- target/tls.sh@28 -- # bdevperf_pid=3495729 00:22:20.020 10:18:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.020 10:18:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:20.020 10:18:53 -- target/tls.sh@31 -- # waitforlisten 3495729 /var/tmp/bdevperf.sock 00:22:20.020 10:18:53 -- common/autotest_common.sh@819 -- # '[' -z 3495729 ']' 00:22:20.020 10:18:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.020 10:18:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:20.020 10:18:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.020 10:18:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:20.020 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:22:20.020 [2024-04-17 10:18:53.244376] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:20.020 [2024-04-17 10:18:53.244439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495729 ] 00:22:20.020 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.020 [2024-04-17 10:18:53.303347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.278 [2024-04-17 10:18:53.366718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.210 10:18:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:21.210 10:18:54 -- common/autotest_common.sh@852 -- # return 0 00:22:21.210 10:18:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:21.210 [2024-04-17 10:18:54.399376] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.210 [2024-04-17 10:18:54.399410] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:21.210 request: 00:22:21.210 { 00:22:21.210 "name": "TLSTEST", 00:22:21.210 "trtype": "tcp", 00:22:21.210 "traddr": "10.0.0.2", 00:22:21.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.210 "adrfam": "ipv4", 00:22:21.210 "trsvcid": "4420", 00:22:21.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.210 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:21.210 "method": "bdev_nvme_attach_controller", 00:22:21.210 "req_id": 1 00:22:21.210 } 00:22:21.210 Got JSON-RPC error response 00:22:21.210 response: 00:22:21.210 { 00:22:21.210 "code": -22, 00:22:21.210 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:21.210 } 00:22:21.210 10:18:54 -- target/tls.sh@36 -- # killprocess 3495729 00:22:21.210 10:18:54 -- common/autotest_common.sh@926 -- # '[' -z 3495729 ']' 00:22:21.210 10:18:54 -- 
common/autotest_common.sh@930 -- # kill -0 3495729 00:22:21.210 10:18:54 -- common/autotest_common.sh@931 -- # uname 00:22:21.210 10:18:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:21.210 10:18:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3495729 00:22:21.210 10:18:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:21.210 10:18:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:21.210 10:18:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3495729' 00:22:21.210 killing process with pid 3495729 00:22:21.210 10:18:54 -- common/autotest_common.sh@945 -- # kill 3495729 00:22:21.210 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.210 00:22:21.210 Latency(us) 00:22:21.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.210 =================================================================================================================== 00:22:21.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:21.210 10:18:54 -- common/autotest_common.sh@950 -- # wait 3495729 00:22:21.469 10:18:54 -- target/tls.sh@37 -- # return 1 00:22:21.469 10:18:54 -- common/autotest_common.sh@643 -- # es=1 00:22:21.469 10:18:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:21.469 10:18:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:21.469 10:18:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:21.469 10:18:54 -- target/tls.sh@183 -- # killprocess 3493297 00:22:21.469 10:18:54 -- common/autotest_common.sh@926 -- # '[' -z 3493297 ']' 00:22:21.469 10:18:54 -- common/autotest_common.sh@930 -- # kill -0 3493297 00:22:21.469 10:18:54 -- common/autotest_common.sh@931 -- # uname 00:22:21.469 10:18:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:21.469 10:18:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3493297 00:22:21.469 10:18:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:21.469 10:18:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:21.469 10:18:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3493297' 00:22:21.469 killing process with pid 3493297 00:22:21.469 10:18:54 -- common/autotest_common.sh@945 -- # kill 3493297 00:22:21.469 10:18:54 -- common/autotest_common.sh@950 -- # wait 3493297 00:22:21.727 10:18:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:21.727 10:18:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:21.727 10:18:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:21.727 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:22:21.727 10:18:54 -- nvmf/common.sh@469 -- # nvmfpid=3496137 00:22:21.727 10:18:54 -- nvmf/common.sh@470 -- # waitforlisten 3496137 00:22:21.727 10:18:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:21.727 10:18:54 -- common/autotest_common.sh@819 -- # '[' -z 3496137 ']' 00:22:21.727 10:18:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.727 10:18:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:21.727 10:18:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
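Stepping back: the rejected attach above (JSON-RPC error -22, "Could not retrieve PSK from file") is driven purely by the key file's mode bits; tcp_load_psk rejects the world-readable 0666 file, while the identical attach succeeds against the 0600 file used for the earlier TLSTESTn1 run. A condensed sketch of that initiator-side pattern, with $SPDK standing in for the workspace path and every rpc.py flag taken from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # abbreviation for the paths logged above
  KEY=$SPDK/test/nvmf/target/key_long.txt

  chmod 0666 "$KEY"
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # -> rejected: "Incorrect permissions for PSK file" / "Could not retrieve PSK from file"

  chmod 0600 "$KEY"
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # -> accepted; the resulting TLSTESTn1 bdev is what bdevperf.py perform_tests exercises

The target side applies the same loader check: pointing nvmf_subsystem_add_host at the 0666 file is what produces the "Internal error" response a little further down, before the mode is restored to 0600.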
00:22:21.727 10:18:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:21.727 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:22:21.727 [2024-04-17 10:18:55.000045] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:21.727 [2024-04-17 10:18:55.000106] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.727 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.992 [2024-04-17 10:18:55.080155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.992 [2024-04-17 10:18:55.162266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:21.993 [2024-04-17 10:18:55.162414] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.993 [2024-04-17 10:18:55.162424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.993 [2024-04-17 10:18:55.162434] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.993 [2024-04-17 10:18:55.162453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.562 10:18:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.562 10:18:55 -- common/autotest_common.sh@852 -- # return 0 00:22:22.562 10:18:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:22.562 10:18:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:22.562 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:22:22.562 10:18:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.562 10:18:55 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:22.562 10:18:55 -- common/autotest_common.sh@640 -- # local es=0 00:22:22.562 10:18:55 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:22.562 10:18:55 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:22.562 10:18:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.562 10:18:55 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:22.562 10:18:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:22.562 10:18:55 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:22.562 10:18:55 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:22.562 10:18:55 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.819 [2024-04-17 10:18:56.100332] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.819 10:18:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.077 10:18:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.335 [2024-04-17 10:18:56.577604] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.335 [2024-04-17 10:18:56.577845] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.335 10:18:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.593 malloc0 00:22:23.593 10:18:56 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.851 10:18:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:24.109 [2024-04-17 10:18:57.284748] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:24.109 [2024-04-17 10:18:57.284782] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:24.109 [2024-04-17 10:18:57.284804] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:24.109 request: 00:22:24.109 { 00:22:24.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.109 "host": "nqn.2016-06.io.spdk:host1", 00:22:24.109 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:24.109 "method": "nvmf_subsystem_add_host", 00:22:24.109 "req_id": 1 00:22:24.109 } 00:22:24.109 Got JSON-RPC error response 00:22:24.109 response: 00:22:24.109 { 00:22:24.109 "code": -32603, 00:22:24.109 "message": "Internal error" 00:22:24.109 } 00:22:24.109 10:18:57 -- common/autotest_common.sh@643 -- # es=1 00:22:24.109 10:18:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:24.109 10:18:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:24.109 10:18:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:24.109 10:18:57 -- target/tls.sh@189 -- # killprocess 3496137 00:22:24.109 10:18:57 -- common/autotest_common.sh@926 -- # '[' -z 3496137 ']' 00:22:24.109 10:18:57 -- common/autotest_common.sh@930 -- # kill -0 3496137 00:22:24.109 10:18:57 -- common/autotest_common.sh@931 -- # uname 00:22:24.109 10:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.109 10:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3496137 00:22:24.109 10:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:24.109 10:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:24.109 10:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3496137' 00:22:24.109 killing process with pid 3496137 00:22:24.109 10:18:57 -- common/autotest_common.sh@945 -- # kill 3496137 00:22:24.109 10:18:57 -- common/autotest_common.sh@950 -- # wait 3496137 00:22:24.368 10:18:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:24.368 10:18:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:24.368 10:18:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:24.368 10:18:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:24.368 10:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.368 10:18:57 -- nvmf/common.sh@469 -- # nvmfpid=3496573 00:22:24.368 10:18:57 -- nvmf/common.sh@470 -- # waitforlisten 3496573 00:22:24.368 10:18:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:24.368 10:18:57 -- common/autotest_common.sh@819 -- # '[' -z 3496573 ']' 00:22:24.368 10:18:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.368 10:18:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.368 10:18:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.368 10:18:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.368 10:18:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.368 [2024-04-17 10:18:57.650725] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:24.368 [2024-04-17 10:18:57.650781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.368 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.625 [2024-04-17 10:18:57.730486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.625 [2024-04-17 10:18:57.817137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:24.625 [2024-04-17 10:18:57.817279] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.625 [2024-04-17 10:18:57.817291] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.625 [2024-04-17 10:18:57.817301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.625 [2024-04-17 10:18:57.817319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.560 10:18:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.560 10:18:58 -- common/autotest_common.sh@852 -- # return 0 00:22:25.560 10:18:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:25.560 10:18:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:25.560 10:18:58 -- common/autotest_common.sh@10 -- # set +x 00:22:25.560 10:18:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.560 10:18:58 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:25.560 10:18:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:25.560 10:18:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.560 [2024-04-17 10:18:58.834946] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.560 10:18:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.818 10:18:59 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:26.076 [2024-04-17 10:18:59.296206] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:26.076 [2024-04-17 10:18:59.296429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.076 10:18:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:26.334 malloc0 00:22:26.334 10:18:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.592 10:18:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:26.851 10:19:00 -- target/tls.sh@197 -- # bdevperf_pid=3497127 00:22:26.851 10:19:00 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.851 10:19:00 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.851 10:19:00 -- target/tls.sh@200 -- # waitforlisten 3497127 /var/tmp/bdevperf.sock 00:22:26.851 10:19:00 -- common/autotest_common.sh@819 -- # '[' -z 3497127 ']' 00:22:26.851 10:19:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.851 10:19:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:26.851 10:19:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
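Collected in one place, the setup_nvmf_tgt sequence the log just walked through comes down to six rpc.py calls; the sketch below condenses them, with $SPDK as a path abbreviation. The -k flag on the listener is what enables the (experimental) TLS mode, and --psk on nvmf_subsystem_add_host binds the formatted key file to the host NQN:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # abbreviation for the paths logged above
  RPC=$SPDK/scripts/rpc.py
  KEY=$SPDK/test/nvmf/target/key_long.txt                  # must remain 0600, per the permission checks above

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

With that in place, the initiator-side bdev_nvme_attach_controller call shown next completes the TLS handshake against 10.0.0.2:4420.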
00:22:26.851 10:19:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:26.851 10:19:00 -- common/autotest_common.sh@10 -- # set +x 00:22:26.851 [2024-04-17 10:19:00.068076] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:26.851 [2024-04-17 10:19:00.068140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497127 ] 00:22:26.851 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.851 [2024-04-17 10:19:00.126055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.109 [2024-04-17 10:19:00.191791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.675 10:19:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:27.675 10:19:01 -- common/autotest_common.sh@852 -- # return 0 00:22:27.675 10:19:01 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:27.933 [2024-04-17 10:19:01.212578] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.192 TLSTESTn1 00:22:28.192 10:19:01 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:28.451 10:19:01 -- target/tls.sh@205 -- # tgtconf='{ 00:22:28.451 "subsystems": [ 00:22:28.451 { 00:22:28.451 "subsystem": "iobuf", 00:22:28.451 "config": [ 00:22:28.451 { 00:22:28.451 "method": "iobuf_set_options", 00:22:28.451 "params": { 00:22:28.451 "small_pool_count": 8192, 00:22:28.451 "large_pool_count": 1024, 00:22:28.451 "small_bufsize": 8192, 00:22:28.451 "large_bufsize": 135168 00:22:28.451 } 00:22:28.451 } 00:22:28.451 ] 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "subsystem": "sock", 00:22:28.451 "config": [ 00:22:28.451 { 00:22:28.451 "method": "sock_impl_set_options", 00:22:28.451 "params": { 00:22:28.451 "impl_name": "posix", 00:22:28.451 "recv_buf_size": 2097152, 00:22:28.451 "send_buf_size": 2097152, 00:22:28.451 "enable_recv_pipe": true, 00:22:28.451 "enable_quickack": false, 00:22:28.451 "enable_placement_id": 0, 00:22:28.451 "enable_zerocopy_send_server": true, 00:22:28.451 "enable_zerocopy_send_client": false, 00:22:28.451 "zerocopy_threshold": 0, 00:22:28.451 "tls_version": 0, 00:22:28.451 "enable_ktls": false 00:22:28.451 } 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "method": "sock_impl_set_options", 00:22:28.451 "params": { 00:22:28.451 "impl_name": "ssl", 00:22:28.451 "recv_buf_size": 4096, 00:22:28.451 "send_buf_size": 4096, 00:22:28.451 "enable_recv_pipe": true, 00:22:28.451 "enable_quickack": false, 00:22:28.451 "enable_placement_id": 0, 00:22:28.451 "enable_zerocopy_send_server": true, 00:22:28.451 "enable_zerocopy_send_client": false, 00:22:28.451 "zerocopy_threshold": 0, 00:22:28.451 "tls_version": 0, 00:22:28.451 "enable_ktls": false 00:22:28.451 } 00:22:28.451 } 00:22:28.451 ] 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "subsystem": "vmd", 00:22:28.451 "config": [] 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "subsystem": "accel", 00:22:28.451 "config": [ 00:22:28.451 { 00:22:28.451 "method": "accel_set_options", 00:22:28.451 "params": { 00:22:28.451 "small_cache_size": 128, 
00:22:28.451 "large_cache_size": 16, 00:22:28.451 "task_count": 2048, 00:22:28.451 "sequence_count": 2048, 00:22:28.451 "buf_count": 2048 00:22:28.451 } 00:22:28.451 } 00:22:28.451 ] 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "subsystem": "bdev", 00:22:28.451 "config": [ 00:22:28.451 { 00:22:28.451 "method": "bdev_set_options", 00:22:28.451 "params": { 00:22:28.451 "bdev_io_pool_size": 65535, 00:22:28.451 "bdev_io_cache_size": 256, 00:22:28.451 "bdev_auto_examine": true, 00:22:28.451 "iobuf_small_cache_size": 128, 00:22:28.451 "iobuf_large_cache_size": 16 00:22:28.451 } 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "method": "bdev_raid_set_options", 00:22:28.451 "params": { 00:22:28.451 "process_window_size_kb": 1024 00:22:28.451 } 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "method": "bdev_iscsi_set_options", 00:22:28.451 "params": { 00:22:28.451 "timeout_sec": 30 00:22:28.451 } 00:22:28.451 }, 00:22:28.451 { 00:22:28.451 "method": "bdev_nvme_set_options", 00:22:28.451 "params": { 00:22:28.451 "action_on_timeout": "none", 00:22:28.451 "timeout_us": 0, 00:22:28.451 "timeout_admin_us": 0, 00:22:28.451 "keep_alive_timeout_ms": 10000, 00:22:28.451 "transport_retry_count": 4, 00:22:28.451 "arbitration_burst": 0, 00:22:28.451 "low_priority_weight": 0, 00:22:28.451 "medium_priority_weight": 0, 00:22:28.451 "high_priority_weight": 0, 00:22:28.452 "nvme_adminq_poll_period_us": 10000, 00:22:28.452 "nvme_ioq_poll_period_us": 0, 00:22:28.452 "io_queue_requests": 0, 00:22:28.452 "delay_cmd_submit": true, 00:22:28.452 "bdev_retry_count": 3, 00:22:28.452 "transport_ack_timeout": 0, 00:22:28.452 "ctrlr_loss_timeout_sec": 0, 00:22:28.452 "reconnect_delay_sec": 0, 00:22:28.452 "fast_io_fail_timeout_sec": 0, 00:22:28.452 "generate_uuids": false, 00:22:28.452 "transport_tos": 0, 00:22:28.452 "io_path_stat": false, 00:22:28.452 "allow_accel_sequence": false 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "bdev_nvme_set_hotplug", 00:22:28.452 "params": { 00:22:28.452 "period_us": 100000, 00:22:28.452 "enable": false 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "bdev_malloc_create", 00:22:28.452 "params": { 00:22:28.452 "name": "malloc0", 00:22:28.452 "num_blocks": 8192, 00:22:28.452 "block_size": 4096, 00:22:28.452 "physical_block_size": 4096, 00:22:28.452 "uuid": "8eeab319-7aa7-4f29-8a02-d7c682734944", 00:22:28.452 "optimal_io_boundary": 0 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "bdev_wait_for_examine" 00:22:28.452 } 00:22:28.452 ] 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "subsystem": "nbd", 00:22:28.452 "config": [] 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "subsystem": "scheduler", 00:22:28.452 "config": [ 00:22:28.452 { 00:22:28.452 "method": "framework_set_scheduler", 00:22:28.452 "params": { 00:22:28.452 "name": "static" 00:22:28.452 } 00:22:28.452 } 00:22:28.452 ] 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "subsystem": "nvmf", 00:22:28.452 "config": [ 00:22:28.452 { 00:22:28.452 "method": "nvmf_set_config", 00:22:28.452 "params": { 00:22:28.452 "discovery_filter": "match_any", 00:22:28.452 "admin_cmd_passthru": { 00:22:28.452 "identify_ctrlr": false 00:22:28.452 } 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_set_max_subsystems", 00:22:28.452 "params": { 00:22:28.452 "max_subsystems": 1024 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_set_crdt", 00:22:28.452 "params": { 00:22:28.452 "crdt1": 0, 00:22:28.452 "crdt2": 0, 00:22:28.452 "crdt3": 0 00:22:28.452 } 
00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_create_transport", 00:22:28.452 "params": { 00:22:28.452 "trtype": "TCP", 00:22:28.452 "max_queue_depth": 128, 00:22:28.452 "max_io_qpairs_per_ctrlr": 127, 00:22:28.452 "in_capsule_data_size": 4096, 00:22:28.452 "max_io_size": 131072, 00:22:28.452 "io_unit_size": 131072, 00:22:28.452 "max_aq_depth": 128, 00:22:28.452 "num_shared_buffers": 511, 00:22:28.452 "buf_cache_size": 4294967295, 00:22:28.452 "dif_insert_or_strip": false, 00:22:28.452 "zcopy": false, 00:22:28.452 "c2h_success": false, 00:22:28.452 "sock_priority": 0, 00:22:28.452 "abort_timeout_sec": 1 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_create_subsystem", 00:22:28.452 "params": { 00:22:28.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.452 "allow_any_host": false, 00:22:28.452 "serial_number": "SPDK00000000000001", 00:22:28.452 "model_number": "SPDK bdev Controller", 00:22:28.452 "max_namespaces": 10, 00:22:28.452 "min_cntlid": 1, 00:22:28.452 "max_cntlid": 65519, 00:22:28.452 "ana_reporting": false 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_subsystem_add_host", 00:22:28.452 "params": { 00:22:28.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.452 "host": "nqn.2016-06.io.spdk:host1", 00:22:28.452 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_subsystem_add_ns", 00:22:28.452 "params": { 00:22:28.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.452 "namespace": { 00:22:28.452 "nsid": 1, 00:22:28.452 "bdev_name": "malloc0", 00:22:28.452 "nguid": "8EEAB3197AA74F298A02D7C682734944", 00:22:28.452 "uuid": "8eeab319-7aa7-4f29-8a02-d7c682734944" 00:22:28.452 } 00:22:28.452 } 00:22:28.452 }, 00:22:28.452 { 00:22:28.452 "method": "nvmf_subsystem_add_listener", 00:22:28.452 "params": { 00:22:28.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.452 "listen_address": { 00:22:28.452 "trtype": "TCP", 00:22:28.452 "adrfam": "IPv4", 00:22:28.452 "traddr": "10.0.0.2", 00:22:28.452 "trsvcid": "4420" 00:22:28.452 }, 00:22:28.452 "secure_channel": true 00:22:28.452 } 00:22:28.452 } 00:22:28.452 ] 00:22:28.452 } 00:22:28.452 ] 00:22:28.452 }' 00:22:28.452 10:19:01 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:28.711 10:19:01 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:28.711 "subsystems": [ 00:22:28.711 { 00:22:28.711 "subsystem": "iobuf", 00:22:28.711 "config": [ 00:22:28.711 { 00:22:28.711 "method": "iobuf_set_options", 00:22:28.711 "params": { 00:22:28.711 "small_pool_count": 8192, 00:22:28.711 "large_pool_count": 1024, 00:22:28.711 "small_bufsize": 8192, 00:22:28.711 "large_bufsize": 135168 00:22:28.711 } 00:22:28.711 } 00:22:28.711 ] 00:22:28.711 }, 00:22:28.711 { 00:22:28.711 "subsystem": "sock", 00:22:28.711 "config": [ 00:22:28.711 { 00:22:28.711 "method": "sock_impl_set_options", 00:22:28.711 "params": { 00:22:28.711 "impl_name": "posix", 00:22:28.711 "recv_buf_size": 2097152, 00:22:28.711 "send_buf_size": 2097152, 00:22:28.711 "enable_recv_pipe": true, 00:22:28.711 "enable_quickack": false, 00:22:28.711 "enable_placement_id": 0, 00:22:28.711 "enable_zerocopy_send_server": true, 00:22:28.711 "enable_zerocopy_send_client": false, 00:22:28.711 "zerocopy_threshold": 0, 00:22:28.712 "tls_version": 0, 00:22:28.712 "enable_ktls": false 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": 
"sock_impl_set_options", 00:22:28.712 "params": { 00:22:28.712 "impl_name": "ssl", 00:22:28.712 "recv_buf_size": 4096, 00:22:28.712 "send_buf_size": 4096, 00:22:28.712 "enable_recv_pipe": true, 00:22:28.712 "enable_quickack": false, 00:22:28.712 "enable_placement_id": 0, 00:22:28.712 "enable_zerocopy_send_server": true, 00:22:28.712 "enable_zerocopy_send_client": false, 00:22:28.712 "zerocopy_threshold": 0, 00:22:28.712 "tls_version": 0, 00:22:28.712 "enable_ktls": false 00:22:28.712 } 00:22:28.712 } 00:22:28.712 ] 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "subsystem": "vmd", 00:22:28.712 "config": [] 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "subsystem": "accel", 00:22:28.712 "config": [ 00:22:28.712 { 00:22:28.712 "method": "accel_set_options", 00:22:28.712 "params": { 00:22:28.712 "small_cache_size": 128, 00:22:28.712 "large_cache_size": 16, 00:22:28.712 "task_count": 2048, 00:22:28.712 "sequence_count": 2048, 00:22:28.712 "buf_count": 2048 00:22:28.712 } 00:22:28.712 } 00:22:28.712 ] 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "subsystem": "bdev", 00:22:28.712 "config": [ 00:22:28.712 { 00:22:28.712 "method": "bdev_set_options", 00:22:28.712 "params": { 00:22:28.712 "bdev_io_pool_size": 65535, 00:22:28.712 "bdev_io_cache_size": 256, 00:22:28.712 "bdev_auto_examine": true, 00:22:28.712 "iobuf_small_cache_size": 128, 00:22:28.712 "iobuf_large_cache_size": 16 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_raid_set_options", 00:22:28.712 "params": { 00:22:28.712 "process_window_size_kb": 1024 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_iscsi_set_options", 00:22:28.712 "params": { 00:22:28.712 "timeout_sec": 30 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_nvme_set_options", 00:22:28.712 "params": { 00:22:28.712 "action_on_timeout": "none", 00:22:28.712 "timeout_us": 0, 00:22:28.712 "timeout_admin_us": 0, 00:22:28.712 "keep_alive_timeout_ms": 10000, 00:22:28.712 "transport_retry_count": 4, 00:22:28.712 "arbitration_burst": 0, 00:22:28.712 "low_priority_weight": 0, 00:22:28.712 "medium_priority_weight": 0, 00:22:28.712 "high_priority_weight": 0, 00:22:28.712 "nvme_adminq_poll_period_us": 10000, 00:22:28.712 "nvme_ioq_poll_period_us": 0, 00:22:28.712 "io_queue_requests": 512, 00:22:28.712 "delay_cmd_submit": true, 00:22:28.712 "bdev_retry_count": 3, 00:22:28.712 "transport_ack_timeout": 0, 00:22:28.712 "ctrlr_loss_timeout_sec": 0, 00:22:28.712 "reconnect_delay_sec": 0, 00:22:28.712 "fast_io_fail_timeout_sec": 0, 00:22:28.712 "generate_uuids": false, 00:22:28.712 "transport_tos": 0, 00:22:28.712 "io_path_stat": false, 00:22:28.712 "allow_accel_sequence": false 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_nvme_attach_controller", 00:22:28.712 "params": { 00:22:28.712 "name": "TLSTEST", 00:22:28.712 "trtype": "TCP", 00:22:28.712 "adrfam": "IPv4", 00:22:28.712 "traddr": "10.0.0.2", 00:22:28.712 "trsvcid": "4420", 00:22:28.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.712 "prchk_reftag": false, 00:22:28.712 "prchk_guard": false, 00:22:28.712 "ctrlr_loss_timeout_sec": 0, 00:22:28.712 "reconnect_delay_sec": 0, 00:22:28.712 "fast_io_fail_timeout_sec": 0, 00:22:28.712 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:28.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.712 "hdgst": false, 00:22:28.712 "ddgst": false 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_nvme_set_hotplug", 00:22:28.712 
"params": { 00:22:28.712 "period_us": 100000, 00:22:28.712 "enable": false 00:22:28.712 } 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "method": "bdev_wait_for_examine" 00:22:28.712 } 00:22:28.712 ] 00:22:28.712 }, 00:22:28.712 { 00:22:28.712 "subsystem": "nbd", 00:22:28.712 "config": [] 00:22:28.712 } 00:22:28.712 ] 00:22:28.712 }' 00:22:28.712 10:19:01 -- target/tls.sh@208 -- # killprocess 3497127 00:22:28.712 10:19:01 -- common/autotest_common.sh@926 -- # '[' -z 3497127 ']' 00:22:28.712 10:19:01 -- common/autotest_common.sh@930 -- # kill -0 3497127 00:22:28.712 10:19:01 -- common/autotest_common.sh@931 -- # uname 00:22:28.712 10:19:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.712 10:19:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3497127 00:22:28.712 10:19:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:28.712 10:19:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:28.712 10:19:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3497127' 00:22:28.712 killing process with pid 3497127 00:22:28.712 10:19:01 -- common/autotest_common.sh@945 -- # kill 3497127 00:22:28.712 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.712 00:22:28.712 Latency(us) 00:22:28.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.712 =================================================================================================================== 00:22:28.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.712 10:19:01 -- common/autotest_common.sh@950 -- # wait 3497127 00:22:28.971 10:19:02 -- target/tls.sh@209 -- # killprocess 3496573 00:22:28.971 10:19:02 -- common/autotest_common.sh@926 -- # '[' -z 3496573 ']' 00:22:28.971 10:19:02 -- common/autotest_common.sh@930 -- # kill -0 3496573 00:22:28.971 10:19:02 -- common/autotest_common.sh@931 -- # uname 00:22:28.971 10:19:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.971 10:19:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3496573 00:22:28.971 10:19:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:28.971 10:19:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:28.971 10:19:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3496573' 00:22:28.971 killing process with pid 3496573 00:22:28.971 10:19:02 -- common/autotest_common.sh@945 -- # kill 3496573 00:22:28.971 10:19:02 -- common/autotest_common.sh@950 -- # wait 3496573 00:22:29.230 10:19:02 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:29.230 10:19:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:29.230 10:19:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:29.230 10:19:02 -- target/tls.sh@212 -- # echo '{ 00:22:29.230 "subsystems": [ 00:22:29.230 { 00:22:29.230 "subsystem": "iobuf", 00:22:29.230 "config": [ 00:22:29.230 { 00:22:29.230 "method": "iobuf_set_options", 00:22:29.230 "params": { 00:22:29.230 "small_pool_count": 8192, 00:22:29.230 "large_pool_count": 1024, 00:22:29.230 "small_bufsize": 8192, 00:22:29.230 "large_bufsize": 135168 00:22:29.230 } 00:22:29.230 } 00:22:29.230 ] 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "subsystem": "sock", 00:22:29.230 "config": [ 00:22:29.230 { 00:22:29.230 "method": "sock_impl_set_options", 00:22:29.230 "params": { 00:22:29.230 "impl_name": "posix", 00:22:29.230 "recv_buf_size": 2097152, 00:22:29.230 "send_buf_size": 2097152, 
00:22:29.230 "enable_recv_pipe": true, 00:22:29.230 "enable_quickack": false, 00:22:29.230 "enable_placement_id": 0, 00:22:29.230 "enable_zerocopy_send_server": true, 00:22:29.230 "enable_zerocopy_send_client": false, 00:22:29.230 "zerocopy_threshold": 0, 00:22:29.230 "tls_version": 0, 00:22:29.230 "enable_ktls": false 00:22:29.230 } 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "method": "sock_impl_set_options", 00:22:29.230 "params": { 00:22:29.230 "impl_name": "ssl", 00:22:29.230 "recv_buf_size": 4096, 00:22:29.230 "send_buf_size": 4096, 00:22:29.230 "enable_recv_pipe": true, 00:22:29.230 "enable_quickack": false, 00:22:29.230 "enable_placement_id": 0, 00:22:29.230 "enable_zerocopy_send_server": true, 00:22:29.230 "enable_zerocopy_send_client": false, 00:22:29.230 "zerocopy_threshold": 0, 00:22:29.230 "tls_version": 0, 00:22:29.230 "enable_ktls": false 00:22:29.230 } 00:22:29.230 } 00:22:29.230 ] 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "subsystem": "vmd", 00:22:29.230 "config": [] 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "subsystem": "accel", 00:22:29.230 "config": [ 00:22:29.230 { 00:22:29.230 "method": "accel_set_options", 00:22:29.230 "params": { 00:22:29.230 "small_cache_size": 128, 00:22:29.230 "large_cache_size": 16, 00:22:29.230 "task_count": 2048, 00:22:29.230 "sequence_count": 2048, 00:22:29.230 "buf_count": 2048 00:22:29.230 } 00:22:29.230 } 00:22:29.230 ] 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "subsystem": "bdev", 00:22:29.230 "config": [ 00:22:29.230 { 00:22:29.230 "method": "bdev_set_options", 00:22:29.230 "params": { 00:22:29.230 "bdev_io_pool_size": 65535, 00:22:29.230 "bdev_io_cache_size": 256, 00:22:29.230 "bdev_auto_examine": true, 00:22:29.230 "iobuf_small_cache_size": 128, 00:22:29.230 "iobuf_large_cache_size": 16 00:22:29.230 } 00:22:29.230 }, 00:22:29.230 { 00:22:29.230 "method": "bdev_raid_set_options", 00:22:29.230 "params": { 00:22:29.230 "process_window_size_kb": 1024 00:22:29.230 } 00:22:29.230 }, 00:22:29.231 { 00:22:29.231 "method": "bdev_iscsi_set_options", 00:22:29.231 "params": { 00:22:29.231 "timeout_sec": 30 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "bdev_nvme_set_options", 00:22:29.231 "params": { 00:22:29.231 "action_on_timeout": "none", 00:22:29.231 "timeout_us": 0, 00:22:29.231 "timeout_admin_us": 0, 00:22:29.231 "keep_alive_timeout_ms": 10000, 00:22:29.231 "transport_retry_count": 4, 00:22:29.231 "arbitration_burst": 0, 00:22:29.231 "low_priority_weight": 0, 00:22:29.231 "medium_priority_weight": 0, 00:22:29.231 "high_priority_weight": 0, 00:22:29.231 "nvme_adminq_poll_period_us": 10000, 00:22:29.231 "nvme_ioq_poll_period_us": 0, 00:22:29.231 "io_queue_requests": 0, 00:22:29.231 "delay_cmd_submit": true, 00:22:29.231 "bdev_retry_count": 3, 00:22:29.231 "transport_ack_timeout": 0, 00:22:29.231 "ctrlr_loss_timeout_sec": 0, 00:22:29.231 "reconnect_delay_sec": 0, 00:22:29.231 "fast_io_fail_timeout_sec": 0, 00:22:29.231 "generate_uuids": false, 00:22:29.231 "transport_tos": 0, 00:22:29.231 "io_path_stat": false, 00:22:29.231 "allow_accel_sequence": false 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "bdev_nvme_set_hotplug", 00:22:29.231 "params": { 00:22:29.231 "period_us": 100000, 00:22:29.231 "enable": false 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "bdev_malloc_create", 00:22:29.231 "params": { 00:22:29.231 "name": "malloc0", 00:22:29.231 "num_blocks": 8192, 00:22:29.231 "block_size": 4096, 00:22:29.231 "physical_block_size": 4096, 00:22:29.231 "uuid": 
"8eeab319-7aa7-4f29-8a02-d7c682734944", 00:22:29.231 "optimal_io_boundary": 0 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "bdev_wait_for_examine" 00:22:29.231 } 00:22:29.231 ] 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "subsystem": "nbd", 00:22:29.231 "config": [] 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "subsystem": "scheduler", 00:22:29.231 "config": [ 00:22:29.231 { 00:22:29.231 "method": "framework_set_scheduler", 00:22:29.231 "params": { 00:22:29.231 "name": "static" 00:22:29.231 } 00:22:29.231 } 00:22:29.231 ] 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "subsystem": "nvmf", 00:22:29.231 "config": [ 00:22:29.231 { 00:22:29.231 "method": "nvmf_set_config", 00:22:29.231 "params": { 00:22:29.231 "discovery_filter": "match_any", 00:22:29.231 "admin_cmd_passthru": { 00:22:29.231 "identify_ctrlr": false 00:22:29.231 } 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "nvmf_set_max_subsystems", 00:22:29.231 "params": { 00:22:29.231 "max_subsystems": 1024 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "nvmf_set_crdt", 00:22:29.231 "params": { 00:22:29.231 "crdt1": 0, 00:22:29.231 "crdt2": 0, 00:22:29.231 "crdt3": 0 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "nvmf_create_transport", 00:22:29.231 "params": { 00:22:29.231 "trtype": "TCP", 00:22:29.231 "max_queue_depth": 128, 00:22:29.231 "max_io_qpairs_per_ctrlr": 127, 00:22:29.231 "in_capsule_data_size": 4096, 00:22:29.231 "max_io_size": 131072, 00:22:29.231 "io_unit_size": 131072, 00:22:29.231 "max_aq_depth": 128, 00:22:29.231 "num_shared_buffers": 511, 00:22:29.231 "buf_cache_size": 4294967295, 00:22:29.231 "dif_insert_or_strip": false, 00:22:29.231 "zcopy": false, 00:22:29.231 "c2h_success": false, 00:22:29.231 "sock_priority": 0, 00:22:29.231 "abort_timeout_sec": 1 00:22:29.231 } 00:22:29.231 }, 00:22:29.231 { 00:22:29.231 "method": "nvmf_create_subsystem", 00:22:29.231 "params": { 00:22:29.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.232 "allow_any_host": false, 00:22:29.232 "serial_number": "SPDK00000000000001", 00:22:29.232 "model_number": "SPDK bdev Controller", 00:22:29.232 "max_namespaces": 10, 00:22:29.232 "min_cntlid": 1, 00:22:29.232 "max_cntlid": 65519, 00:22:29.232 "ana_reporting": false 00:22:29.232 } 00:22:29.232 }, 00:22:29.232 { 00:22:29.232 "method": "nvmf_subsystem_add_host", 00:22:29.232 "params": { 00:22:29.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.232 "host": "nqn.2016-06.io.spdk:host1", 00:22:29.232 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:29.232 } 00:22:29.232 }, 00:22:29.232 { 00:22:29.232 "method": "nvmf_subsystem_add_ns", 00:22:29.232 "params": { 00:22:29.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.232 "namespace": { 00:22:29.232 "nsid": 1, 00:22:29.232 "bdev_name": "malloc0", 00:22:29.232 "nguid": "8EEAB3197AA74F298A02D7C682734944", 00:22:29.232 "uuid": "8eeab319-7aa7-4f29-8a02-d7c682734944" 00:22:29.232 } 00:22:29.232 } 00:22:29.232 }, 00:22:29.232 { 00:22:29.232 "method": "nvmf_subsystem_add_listener", 00:22:29.232 "params": { 00:22:29.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.232 "listen_address": { 00:22:29.232 "trtype": "TCP", 00:22:29.232 "adrfam": "IPv4", 00:22:29.232 "traddr": "10.0.0.2", 00:22:29.232 "trsvcid": "4420" 00:22:29.232 }, 00:22:29.232 "secure_channel": true 00:22:29.232 } 00:22:29.232 } 00:22:29.232 ] 00:22:29.232 } 00:22:29.232 ] 00:22:29.232 }' 00:22:29.232 10:19:02 -- common/autotest_common.sh@10 -- # set +x 
00:22:29.232 10:19:02 -- nvmf/common.sh@469 -- # nvmfpid=3497601 00:22:29.232 10:19:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:29.232 10:19:02 -- nvmf/common.sh@470 -- # waitforlisten 3497601 00:22:29.232 10:19:02 -- common/autotest_common.sh@819 -- # '[' -z 3497601 ']' 00:22:29.232 10:19:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.232 10:19:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:29.232 10:19:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.232 10:19:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:29.232 10:19:02 -- common/autotest_common.sh@10 -- # set +x 00:22:29.232 [2024-04-17 10:19:02.494961] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:29.232 [2024-04-17 10:19:02.495005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.232 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.232 [2024-04-17 10:19:02.559969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.490 [2024-04-17 10:19:02.645797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.490 [2024-04-17 10:19:02.645941] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.490 [2024-04-17 10:19:02.645952] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.490 [2024-04-17 10:19:02.645961] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.490 [2024-04-17 10:19:02.645982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.747 [2024-04-17 10:19:02.848326] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.747 [2024-04-17 10:19:02.880322] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.747 [2024-04-17 10:19:02.880541] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.314 10:19:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.314 10:19:03 -- common/autotest_common.sh@852 -- # return 0 00:22:30.314 10:19:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:30.314 10:19:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:30.314 10:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.314 10:19:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.314 10:19:03 -- target/tls.sh@216 -- # bdevperf_pid=3497703 00:22:30.314 10:19:03 -- target/tls.sh@217 -- # waitforlisten 3497703 /var/tmp/bdevperf.sock 00:22:30.314 10:19:03 -- common/autotest_common.sh@819 -- # '[' -z 3497703 ']' 00:22:30.314 10:19:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.314 10:19:03 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:30.314 10:19:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.314 10:19:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:30.314 10:19:03 -- target/tls.sh@213 -- # echo '{ 00:22:30.314 "subsystems": [ 00:22:30.314 { 00:22:30.314 "subsystem": "iobuf", 00:22:30.314 "config": [ 00:22:30.314 { 00:22:30.314 "method": "iobuf_set_options", 00:22:30.314 "params": { 00:22:30.314 "small_pool_count": 8192, 00:22:30.314 "large_pool_count": 1024, 00:22:30.314 "small_bufsize": 8192, 00:22:30.314 "large_bufsize": 135168 00:22:30.314 } 00:22:30.314 } 00:22:30.314 ] 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "subsystem": "sock", 00:22:30.314 "config": [ 00:22:30.314 { 00:22:30.314 "method": "sock_impl_set_options", 00:22:30.314 "params": { 00:22:30.314 "impl_name": "posix", 00:22:30.314 "recv_buf_size": 2097152, 00:22:30.314 "send_buf_size": 2097152, 00:22:30.314 "enable_recv_pipe": true, 00:22:30.314 "enable_quickack": false, 00:22:30.314 "enable_placement_id": 0, 00:22:30.314 "enable_zerocopy_send_server": true, 00:22:30.314 "enable_zerocopy_send_client": false, 00:22:30.314 "zerocopy_threshold": 0, 00:22:30.314 "tls_version": 0, 00:22:30.314 "enable_ktls": false 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "sock_impl_set_options", 00:22:30.314 "params": { 00:22:30.314 "impl_name": "ssl", 00:22:30.314 "recv_buf_size": 4096, 00:22:30.314 "send_buf_size": 4096, 00:22:30.314 "enable_recv_pipe": true, 00:22:30.314 "enable_quickack": false, 00:22:30.314 "enable_placement_id": 0, 00:22:30.314 "enable_zerocopy_send_server": true, 00:22:30.314 "enable_zerocopy_send_client": false, 00:22:30.314 "zerocopy_threshold": 0, 00:22:30.314 "tls_version": 0, 00:22:30.314 "enable_ktls": false 00:22:30.314 } 00:22:30.314 } 00:22:30.314 ] 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "subsystem": "vmd", 00:22:30.314 "config": [] 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "subsystem": "accel", 00:22:30.314 "config": [ 00:22:30.314 { 00:22:30.314 "method": "accel_set_options", 00:22:30.314 "params": { 00:22:30.314 "small_cache_size": 128, 00:22:30.314 "large_cache_size": 16, 00:22:30.314 "task_count": 2048, 00:22:30.314 "sequence_count": 2048, 00:22:30.314 "buf_count": 2048 00:22:30.314 } 00:22:30.314 } 00:22:30.314 ] 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "subsystem": "bdev", 00:22:30.314 "config": [ 00:22:30.314 { 00:22:30.314 "method": "bdev_set_options", 00:22:30.314 "params": { 00:22:30.314 "bdev_io_pool_size": 65535, 00:22:30.314 "bdev_io_cache_size": 256, 00:22:30.314 "bdev_auto_examine": true, 00:22:30.314 "iobuf_small_cache_size": 128, 00:22:30.314 "iobuf_large_cache_size": 16 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_raid_set_options", 00:22:30.314 "params": { 00:22:30.314 "process_window_size_kb": 1024 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_iscsi_set_options", 00:22:30.314 "params": { 00:22:30.314 "timeout_sec": 30 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_nvme_set_options", 00:22:30.314 "params": { 00:22:30.314 "action_on_timeout": "none", 00:22:30.314 "timeout_us": 0, 00:22:30.314 "timeout_admin_us": 0, 00:22:30.314 "keep_alive_timeout_ms": 10000, 00:22:30.314 "transport_retry_count": 4, 00:22:30.314 "arbitration_burst": 0, 00:22:30.314 "low_priority_weight": 0, 00:22:30.314 "medium_priority_weight": 0, 00:22:30.314 "high_priority_weight": 0, 00:22:30.314 "nvme_adminq_poll_period_us": 10000, 00:22:30.314 "nvme_ioq_poll_period_us": 0, 00:22:30.314 "io_queue_requests": 512, 00:22:30.314 "delay_cmd_submit": true, 00:22:30.314 "bdev_retry_count": 3, 00:22:30.314 "transport_ack_timeout": 0, 00:22:30.314 
"ctrlr_loss_timeout_sec": 0, 00:22:30.314 "reconnect_delay_sec": 0, 00:22:30.314 "fast_io_fail_timeout_sec": 0, 00:22:30.314 "generate_uuids": false, 00:22:30.314 "transport_tos": 0, 00:22:30.314 "io_path_stat": false, 00:22:30.314 "allow_accel_sequence": false 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_nvme_attach_controller", 00:22:30.314 "params": { 00:22:30.314 "name": "TLSTEST", 00:22:30.314 "trtype": "TCP", 00:22:30.314 "adrfam": "IPv4", 00:22:30.314 "traddr": "10.0.0.2", 00:22:30.314 "trsvcid": "4420", 00:22:30.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.314 "prchk_reftag": false, 00:22:30.314 "prchk_guard": false, 00:22:30.314 "ctrlr_loss_timeout_sec": 0, 00:22:30.314 "reconnect_delay_sec": 0, 00:22:30.314 "fast_io_fail_timeout_sec": 0, 00:22:30.314 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:30.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.314 "hdgst": false, 00:22:30.314 "ddgst": false 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_nvme_set_hotplug", 00:22:30.314 "params": { 00:22:30.314 "period_us": 100000, 00:22:30.314 "enable": false 00:22:30.314 } 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "method": "bdev_wait_for_examine" 00:22:30.314 } 00:22:30.314 ] 00:22:30.314 }, 00:22:30.314 { 00:22:30.314 "subsystem": "nbd", 00:22:30.314 "config": [] 00:22:30.314 } 00:22:30.314 ] 00:22:30.314 }' 00:22:30.314 10:19:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.314 10:19:03 -- common/autotest_common.sh@10 -- # set +x 00:22:30.314 [2024-04-17 10:19:03.505220] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:30.314 [2024-04-17 10:19:03.505279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497703 ] 00:22:30.314 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.314 [2024-04-17 10:19:03.563007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.314 [2024-04-17 10:19:03.628350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.573 [2024-04-17 10:19:03.759270] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.139 10:19:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:31.139 10:19:04 -- common/autotest_common.sh@852 -- # return 0 00:22:31.139 10:19:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:31.397 Running I/O for 10 seconds... 
00:22:41.365 00:22:41.365 Latency(us) 00:22:41.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.365 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.365 Verification LBA range: start 0x0 length 0x2000 00:22:41.365 TLSTESTn1 : 10.02 4362.93 17.04 0.00 0.00 29305.90 4140.68 47424.23 00:22:41.365 =================================================================================================================== 00:22:41.365 Total : 4362.93 17.04 0.00 0.00 29305.90 4140.68 47424.23 00:22:41.365 0 00:22:41.365 10:19:14 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.365 10:19:14 -- target/tls.sh@223 -- # killprocess 3497703 00:22:41.365 10:19:14 -- common/autotest_common.sh@926 -- # '[' -z 3497703 ']' 00:22:41.365 10:19:14 -- common/autotest_common.sh@930 -- # kill -0 3497703 00:22:41.365 10:19:14 -- common/autotest_common.sh@931 -- # uname 00:22:41.365 10:19:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.365 10:19:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3497703 00:22:41.365 10:19:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:41.365 10:19:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:41.365 10:19:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3497703' 00:22:41.365 killing process with pid 3497703 00:22:41.365 10:19:14 -- common/autotest_common.sh@945 -- # kill 3497703 00:22:41.365 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.365 00:22:41.365 Latency(us) 00:22:41.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.365 =================================================================================================================== 00:22:41.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.365 10:19:14 -- common/autotest_common.sh@950 -- # wait 3497703 00:22:41.624 10:19:14 -- target/tls.sh@224 -- # killprocess 3497601 00:22:41.624 10:19:14 -- common/autotest_common.sh@926 -- # '[' -z 3497601 ']' 00:22:41.624 10:19:14 -- common/autotest_common.sh@930 -- # kill -0 3497601 00:22:41.624 10:19:14 -- common/autotest_common.sh@931 -- # uname 00:22:41.624 10:19:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:41.624 10:19:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3497601 00:22:41.624 10:19:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:41.624 10:19:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:41.624 10:19:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3497601' 00:22:41.624 killing process with pid 3497601 00:22:41.624 10:19:14 -- common/autotest_common.sh@945 -- # kill 3497601 00:22:41.624 10:19:14 -- common/autotest_common.sh@950 -- # wait 3497601 00:22:41.883 10:19:15 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:41.883 10:19:15 -- target/tls.sh@227 -- # cleanup 00:22:41.883 10:19:15 -- target/tls.sh@15 -- # process_shm --id 0 00:22:41.883 10:19:15 -- common/autotest_common.sh@796 -- # type=--id 00:22:41.883 10:19:15 -- common/autotest_common.sh@797 -- # id=0 00:22:41.883 10:19:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:41.883 10:19:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:41.883 10:19:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:41.883 10:19:15 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:22:41.883 10:19:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:41.883 10:19:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:41.883 nvmf_trace.0 00:22:41.883 10:19:15 -- common/autotest_common.sh@811 -- # return 0 00:22:41.883 10:19:15 -- target/tls.sh@16 -- # killprocess 3497703 00:22:41.883 10:19:15 -- common/autotest_common.sh@926 -- # '[' -z 3497703 ']' 00:22:41.883 10:19:15 -- common/autotest_common.sh@930 -- # kill -0 3497703 00:22:41.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3497703) - No such process 00:22:41.883 10:19:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3497703 is not found' 00:22:41.883 Process with pid 3497703 is not found 00:22:41.883 10:19:15 -- target/tls.sh@17 -- # nvmftestfini 00:22:41.883 10:19:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:41.883 10:19:15 -- nvmf/common.sh@116 -- # sync 00:22:42.142 10:19:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:42.142 10:19:15 -- nvmf/common.sh@119 -- # set +e 00:22:42.142 10:19:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:42.142 10:19:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:42.142 rmmod nvme_tcp 00:22:42.142 rmmod nvme_fabrics 00:22:42.142 rmmod nvme_keyring 00:22:42.142 10:19:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:42.142 10:19:15 -- nvmf/common.sh@123 -- # set -e 00:22:42.142 10:19:15 -- nvmf/common.sh@124 -- # return 0 00:22:42.142 10:19:15 -- nvmf/common.sh@477 -- # '[' -n 3497601 ']' 00:22:42.142 10:19:15 -- nvmf/common.sh@478 -- # killprocess 3497601 00:22:42.142 10:19:15 -- common/autotest_common.sh@926 -- # '[' -z 3497601 ']' 00:22:42.142 10:19:15 -- common/autotest_common.sh@930 -- # kill -0 3497601 00:22:42.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3497601) - No such process 00:22:42.142 10:19:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3497601 is not found' 00:22:42.142 Process with pid 3497601 is not found 00:22:42.142 10:19:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:42.142 10:19:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:42.142 10:19:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:42.142 10:19:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.142 10:19:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:42.142 10:19:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.142 10:19:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.142 10:19:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.044 10:19:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:44.044 10:19:17 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:44.044 00:22:44.044 real 1m16.950s 00:22:44.044 user 1m59.273s 00:22:44.044 sys 0m26.523s 00:22:44.044 10:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.044 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:22:44.044 ************************************ 00:22:44.044 END TEST nvmf_tls 00:22:44.044 ************************************ 
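The cleanup path above packs the SPDK trace ring left in /dev/shm (nvmf_trace.0, the file referenced earlier by the "copy /dev/shm/nvmf_trace.0 for offline analysis" notice) into the job's output directory before tearing the target down. The same step in isolation, with an illustrative output path:

  # Archive any SPDK shared-memory trace files for offline analysis
  # (e.g. with 'spdk_trace -s nvmf -i 0' while the target is still running).
  mkdir -p ./output
  for shm in $(find /dev/shm -name '*.0' -printf '%f\n'); do
      tar -C /dev/shm/ -czf "./output/${shm}_shm.tar.gz" "$shm"
  done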
00:22:44.044 10:19:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:44.044 10:19:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:44.044 10:19:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.044 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:22:44.044 ************************************ 00:22:44.044 START TEST nvmf_fips 00:22:44.044 ************************************ 00:22:44.044 10:19:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:44.303 * Looking for test storage... 00:22:44.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:44.303 10:19:17 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.303 10:19:17 -- nvmf/common.sh@7 -- # uname -s 00:22:44.303 10:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.303 10:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.303 10:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.303 10:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.303 10:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.303 10:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.303 10:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.303 10:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.303 10:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.303 10:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.303 10:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:44.303 10:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:44.303 10:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.303 10:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.303 10:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.303 10:19:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.303 10:19:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.303 10:19:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.303 10:19:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.303 10:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.303 10:19:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.303 10:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.303 10:19:17 -- paths/export.sh@5 -- # export PATH 00:22:44.303 10:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.303 10:19:17 -- nvmf/common.sh@46 -- # : 0 00:22:44.303 10:19:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:44.303 10:19:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:44.303 10:19:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:44.303 10:19:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.303 10:19:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.303 10:19:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:44.303 10:19:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:44.303 10:19:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:44.303 10:19:17 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.303 10:19:17 -- fips/fips.sh@89 -- # check_openssl_version 00:22:44.303 10:19:17 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:44.303 10:19:17 -- fips/fips.sh@85 -- # openssl version 00:22:44.303 10:19:17 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:44.303 10:19:17 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:44.303 10:19:17 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:44.303 10:19:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:44.303 10:19:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:44.303 10:19:17 -- scripts/common.sh@335 -- # IFS=.-: 00:22:44.303 10:19:17 -- scripts/common.sh@335 -- # read -ra ver1 00:22:44.303 10:19:17 -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.303 10:19:17 -- scripts/common.sh@336 -- # read -ra ver2 00:22:44.303 10:19:17 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:44.303 10:19:17 -- scripts/common.sh@339 -- # ver1_l=3 00:22:44.303 10:19:17 -- scripts/common.sh@340 -- # ver2_l=3 00:22:44.303 10:19:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:22:44.303 10:19:17 -- scripts/common.sh@343 -- # case "$op" in 00:22:44.303 10:19:17 -- scripts/common.sh@347 -- # : 1 00:22:44.303 10:19:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:44.303 10:19:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.303 10:19:17 -- scripts/common.sh@364 -- # decimal 3 00:22:44.303 10:19:17 -- scripts/common.sh@352 -- # local d=3 00:22:44.303 10:19:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 3 00:22:44.304 10:19:17 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # decimal 3 00:22:44.304 10:19:17 -- scripts/common.sh@352 -- # local d=3 00:22:44.304 10:19:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 3 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:44.304 10:19:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:44.304 10:19:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:44.304 10:19:17 -- scripts/common.sh@363 -- # (( v++ )) 00:22:44.304 10:19:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.304 10:19:17 -- scripts/common.sh@364 -- # decimal 0 00:22:44.304 10:19:17 -- scripts/common.sh@352 -- # local d=0 00:22:44.304 10:19:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 0 00:22:44.304 10:19:17 -- scripts/common.sh@364 -- # ver1[v]=0 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # decimal 0 00:22:44.304 10:19:17 -- scripts/common.sh@352 -- # local d=0 00:22:44.304 10:19:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 0 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:44.304 10:19:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:44.304 10:19:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:44.304 10:19:17 -- scripts/common.sh@363 -- # (( v++ )) 00:22:44.304 10:19:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.304 10:19:17 -- scripts/common.sh@364 -- # decimal 9 00:22:44.304 10:19:17 -- scripts/common.sh@352 -- # local d=9 00:22:44.304 10:19:17 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 9 00:22:44.304 10:19:17 -- scripts/common.sh@364 -- # ver1[v]=9 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # decimal 0 00:22:44.304 10:19:17 -- scripts/common.sh@352 -- # local d=0 00:22:44.304 10:19:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:44.304 10:19:17 -- scripts/common.sh@354 -- # echo 0 00:22:44.304 10:19:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:44.304 10:19:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:44.304 10:19:17 -- scripts/common.sh@366 -- # return 0 00:22:44.304 10:19:17 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:44.304 10:19:17 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:44.304 10:19:17 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:44.304 10:19:17 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:44.304 10:19:17 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:44.304 10:19:17 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:44.304 10:19:17 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:44.304 10:19:17 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:44.304 10:19:17 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:44.304 10:19:17 -- fips/fips.sh@114 -- # build_openssl_config 00:22:44.304 10:19:17 -- fips/fips.sh@37 -- # cat 00:22:44.304 10:19:17 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:44.304 10:19:17 -- fips/fips.sh@58 -- # cat - 00:22:44.304 10:19:17 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:44.304 10:19:17 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:44.304 10:19:17 -- fips/fips.sh@117 -- # mapfile -t providers 00:22:44.304 10:19:17 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:22:44.304 10:19:17 -- fips/fips.sh@117 -- # openssl list -providers 00:22:44.304 10:19:17 -- fips/fips.sh@117 -- # grep name 00:22:44.304 10:19:17 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:44.304 10:19:17 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:44.304 10:19:17 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:44.304 10:19:17 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:44.304 10:19:17 -- common/autotest_common.sh@640 -- # local es=0 00:22:44.304 10:19:17 -- fips/fips.sh@128 -- # : 00:22:44.304 10:19:17 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:44.304 10:19:17 -- common/autotest_common.sh@628 -- # local arg=openssl 00:22:44.304 10:19:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:44.304 10:19:17 -- common/autotest_common.sh@632 -- # type -t openssl 00:22:44.304 10:19:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:44.304 10:19:17 -- common/autotest_common.sh@634 -- # type -P openssl 00:22:44.304 10:19:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:44.304 10:19:17 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:22:44.304 10:19:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:22:44.304 10:19:17 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:22:44.563 Error setting digest 00:22:44.563 0032A912407F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:44.563 0032A912407F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:44.563 10:19:17 -- common/autotest_common.sh@643 -- # es=1 00:22:44.563 10:19:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:44.563 10:19:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:44.563 10:19:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
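Before touching the target, fips.sh verifies that the host's OpenSSL is 3.x, that a fips provider is loaded alongside the base provider, and that a non-approved digest (MD5) is actually rejected, which is exactly the "Error setting digest" output above. A condensed sketch of that sanity check, assuming an OpenSSL 3.x build as on this RHEL 9 host:

  openssl version                       # provider-based FIPS needs >= 3.0.0
  openssl list -providers | grep name   # expect both a base and a fips provider
  if openssl md5 /dev/null >/dev/null 2>&1; then
      echo "MD5 still works - FIPS restrictions are NOT active" >&2
      exit 1
  fi
  echo "MD5 rejected as expected - FIPS mode is in effect"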
00:22:44.563 10:19:17 -- fips/fips.sh@131 -- # nvmftestinit 00:22:44.563 10:19:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:44.563 10:19:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.563 10:19:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:44.563 10:19:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:44.563 10:19:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:44.563 10:19:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.563 10:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.563 10:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.563 10:19:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:44.563 10:19:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:44.563 10:19:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:44.563 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:22:49.826 10:19:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:49.826 10:19:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:49.826 10:19:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:49.826 10:19:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:49.826 10:19:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:49.826 10:19:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:49.826 10:19:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:49.826 10:19:23 -- nvmf/common.sh@294 -- # net_devs=() 00:22:49.826 10:19:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:49.826 10:19:23 -- nvmf/common.sh@295 -- # e810=() 00:22:49.826 10:19:23 -- nvmf/common.sh@295 -- # local -ga e810 00:22:49.826 10:19:23 -- nvmf/common.sh@296 -- # x722=() 00:22:49.826 10:19:23 -- nvmf/common.sh@296 -- # local -ga x722 00:22:49.826 10:19:23 -- nvmf/common.sh@297 -- # mlx=() 00:22:49.826 10:19:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:49.826 10:19:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.826 10:19:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:49.826 10:19:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:49.826 10:19:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.826 10:19:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:49.826 Found 0000:af:00.0 
(0x8086 - 0x159b) 00:22:49.826 10:19:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.826 10:19:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:49.826 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:49.826 10:19:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.826 10:19:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.826 10:19:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.826 10:19:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:49.826 Found net devices under 0000:af:00.0: cvl_0_0 00:22:49.826 10:19:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.826 10:19:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.826 10:19:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.826 10:19:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.826 10:19:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:49.826 Found net devices under 0000:af:00.1: cvl_0_1 00:22:49.826 10:19:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.826 10:19:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:49.826 10:19:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:49.826 10:19:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:49.826 10:19:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.826 10:19:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.826 10:19:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.826 10:19:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:49.826 10:19:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.826 10:19:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.827 10:19:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:49.827 10:19:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.827 10:19:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.827 10:19:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:49.827 10:19:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:49.827 10:19:23 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:22:49.827 10:19:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.085 10:19:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.085 10:19:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.085 10:19:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:50.085 10:19:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.085 10:19:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.085 10:19:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.085 10:19:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:50.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:22:50.085 00:22:50.085 --- 10.0.0.2 ping statistics --- 00:22:50.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.085 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:22:50.085 10:19:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:22:50.085 00:22:50.085 --- 10.0.0.1 ping statistics --- 00:22:50.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.085 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:50.085 10:19:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.085 10:19:23 -- nvmf/common.sh@410 -- # return 0 00:22:50.085 10:19:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:50.085 10:19:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.085 10:19:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:50.085 10:19:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:50.085 10:19:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.085 10:19:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:50.085 10:19:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:50.085 10:19:23 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:50.085 10:19:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:50.085 10:19:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:50.085 10:19:23 -- common/autotest_common.sh@10 -- # set +x 00:22:50.085 10:19:23 -- nvmf/common.sh@469 -- # nvmfpid=3503587 00:22:50.085 10:19:23 -- nvmf/common.sh@470 -- # waitforlisten 3503587 00:22:50.085 10:19:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:50.085 10:19:23 -- common/autotest_common.sh@819 -- # '[' -z 3503587 ']' 00:22:50.085 10:19:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.085 10:19:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:50.085 10:19:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.085 10:19:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:50.085 10:19:23 -- common/autotest_common.sh@10 -- # set +x 00:22:50.343 [2024-04-17 10:19:23.478486] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:50.343 [2024-04-17 10:19:23.478546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.343 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.343 [2024-04-17 10:19:23.555463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.343 [2024-04-17 10:19:23.640704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:50.343 [2024-04-17 10:19:23.640846] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.343 [2024-04-17 10:19:23.640858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.343 [2024-04-17 10:19:23.640867] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.343 [2024-04-17 10:19:23.640893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.275 10:19:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:51.275 10:19:24 -- common/autotest_common.sh@852 -- # return 0 00:22:51.275 10:19:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:51.275 10:19:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:51.275 10:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:51.275 10:19:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.275 10:19:24 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:51.275 10:19:24 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:51.275 10:19:24 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:51.275 10:19:24 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:51.275 10:19:24 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:51.275 10:19:24 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:51.275 10:19:24 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:51.275 10:19:24 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.533 [2024-04-17 10:19:24.635169] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.533 [2024-04-17 10:19:24.651143] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.533 [2024-04-17 10:19:24.651361] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.533 malloc0 00:22:51.533 10:19:24 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.533 10:19:24 -- fips/fips.sh@148 -- # bdevperf_pid=3503765 00:22:51.533 10:19:24 -- fips/fips.sh@149 -- # waitforlisten 3503765 /var/tmp/bdevperf.sock 00:22:51.533 10:19:24 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.533 10:19:24 -- common/autotest_common.sh@819 -- # '[' -z 3503765 ']' 00:22:51.533 10:19:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.533 10:19:24 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:51.533 10:19:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.533 10:19:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:51.533 10:19:24 -- common/autotest_common.sh@10 -- # set +x 00:22:51.533 [2024-04-17 10:19:24.770604] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:51.533 [2024-04-17 10:19:24.770670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503765 ] 00:22:51.533 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.533 [2024-04-17 10:19:24.827990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.790 [2024-04-17 10:19:24.893534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.354 10:19:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.354 10:19:25 -- common/autotest_common.sh@852 -- # return 0 00:22:52.354 10:19:25 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:52.611 [2024-04-17 10:19:25.898347] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:52.868 TLSTESTn1 00:22:52.868 10:19:25 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.868 Running I/O for 10 seconds... 
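The PSK consumed on both ends is a plain text file in the NVMe/TCP PSK interchange format ("NVMeTLSkey-1:01:<base64>:"), written once by fips.sh and then referenced by path from nvmf_subsystem_add_host on the target and --psk on the initiator. A sketch of that preparation step; the key value is the test key echoed above, used here only to show the format:

  # Write the PSK interchange string without a trailing newline and keep it
  # private, mirroring the chmod 0600 done by fips.sh.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > ./key.txt
  chmod 0600 ./key.txt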
00:23:02.893 00:23:02.893 Latency(us) 00:23:02.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.893 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:02.893 Verification LBA range: start 0x0 length 0x2000 00:23:02.893 TLSTESTn1 : 10.01 4364.75 17.05 0.00 0.00 29299.16 5987.61 47662.55 00:23:02.893 =================================================================================================================== 00:23:02.893 Total : 4364.75 17.05 0.00 0.00 29299.16 5987.61 47662.55 00:23:02.893 0 00:23:02.893 10:19:36 -- fips/fips.sh@1 -- # cleanup 00:23:02.893 10:19:36 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:02.893 10:19:36 -- common/autotest_common.sh@796 -- # type=--id 00:23:02.893 10:19:36 -- common/autotest_common.sh@797 -- # id=0 00:23:02.893 10:19:36 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:23:02.893 10:19:36 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:02.893 10:19:36 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:23:02.893 10:19:36 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:23:02.893 10:19:36 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:23:02.893 10:19:36 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:02.893 nvmf_trace.0 00:23:03.151 10:19:36 -- common/autotest_common.sh@811 -- # return 0 00:23:03.151 10:19:36 -- fips/fips.sh@16 -- # killprocess 3503765 00:23:03.151 10:19:36 -- common/autotest_common.sh@926 -- # '[' -z 3503765 ']' 00:23:03.151 10:19:36 -- common/autotest_common.sh@930 -- # kill -0 3503765 00:23:03.151 10:19:36 -- common/autotest_common.sh@931 -- # uname 00:23:03.151 10:19:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:03.151 10:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3503765 00:23:03.151 10:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:23:03.151 10:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:23:03.151 10:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3503765' 00:23:03.151 killing process with pid 3503765 00:23:03.151 10:19:36 -- common/autotest_common.sh@945 -- # kill 3503765 00:23:03.151 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.151 00:23:03.151 Latency(us) 00:23:03.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.151 =================================================================================================================== 00:23:03.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.151 10:19:36 -- common/autotest_common.sh@950 -- # wait 3503765 00:23:03.409 10:19:36 -- fips/fips.sh@17 -- # nvmftestfini 00:23:03.409 10:19:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:03.409 10:19:36 -- nvmf/common.sh@116 -- # sync 00:23:03.409 10:19:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:03.409 10:19:36 -- nvmf/common.sh@119 -- # set +e 00:23:03.409 10:19:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:03.409 10:19:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:03.409 rmmod nvme_tcp 00:23:03.409 rmmod nvme_fabrics 00:23:03.409 rmmod nvme_keyring 00:23:03.409 10:19:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:03.409 10:19:36 -- nvmf/common.sh@123 -- # set -e 00:23:03.409 10:19:36 -- nvmf/common.sh@124 -- # return 0 
00:23:03.409 10:19:36 -- nvmf/common.sh@477 -- # '[' -n 3503587 ']' 00:23:03.409 10:19:36 -- nvmf/common.sh@478 -- # killprocess 3503587 00:23:03.409 10:19:36 -- common/autotest_common.sh@926 -- # '[' -z 3503587 ']' 00:23:03.409 10:19:36 -- common/autotest_common.sh@930 -- # kill -0 3503587 00:23:03.409 10:19:36 -- common/autotest_common.sh@931 -- # uname 00:23:03.409 10:19:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:03.409 10:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3503587 00:23:03.409 10:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:03.409 10:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:03.409 10:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3503587' 00:23:03.409 killing process with pid 3503587 00:23:03.409 10:19:36 -- common/autotest_common.sh@945 -- # kill 3503587 00:23:03.409 10:19:36 -- common/autotest_common.sh@950 -- # wait 3503587 00:23:03.667 10:19:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:03.667 10:19:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:03.667 10:19:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:03.667 10:19:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.667 10:19:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:03.667 10:19:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.667 10:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.667 10:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.193 10:19:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:06.193 10:19:38 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:06.193 00:23:06.193 real 0m21.545s 00:23:06.193 user 0m23.665s 00:23:06.193 sys 0m9.306s 00:23:06.193 10:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.193 10:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 ************************************ 00:23:06.193 END TEST nvmf_fips 00:23:06.193 ************************************ 00:23:06.193 10:19:38 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:23:06.193 10:19:38 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:06.193 10:19:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:06.193 10:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:06.193 10:19:38 -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 ************************************ 00:23:06.193 START TEST nvmf_fuzz 00:23:06.193 ************************************ 00:23:06.193 10:19:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:06.193 * Looking for test storage... 
00:23:06.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.193 10:19:39 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.193 10:19:39 -- nvmf/common.sh@7 -- # uname -s 00:23:06.193 10:19:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.193 10:19:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.193 10:19:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.193 10:19:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.193 10:19:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.193 10:19:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.193 10:19:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.193 10:19:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.193 10:19:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.193 10:19:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.193 10:19:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:06.193 10:19:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:06.193 10:19:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.193 10:19:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.193 10:19:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.193 10:19:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.193 10:19:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.193 10:19:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.193 10:19:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.193 10:19:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.193 10:19:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.193 10:19:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.193 10:19:39 -- paths/export.sh@5 -- # export PATH 00:23:06.193 10:19:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.193 10:19:39 -- nvmf/common.sh@46 -- # : 0 00:23:06.193 10:19:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:06.193 10:19:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:06.193 10:19:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:06.193 10:19:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.193 10:19:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.193 10:19:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:06.193 10:19:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:06.193 10:19:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:06.193 10:19:39 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:06.193 10:19:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:06.193 10:19:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.193 10:19:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:06.193 10:19:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:06.193 10:19:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:06.193 10:19:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.193 10:19:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.193 10:19:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.193 10:19:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:06.193 10:19:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:06.193 10:19:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:06.193 10:19:39 -- common/autotest_common.sh@10 -- # set +x 00:23:11.464 10:19:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:11.464 10:19:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:11.464 10:19:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:11.464 10:19:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:11.464 10:19:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:11.464 10:19:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:11.464 10:19:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:11.464 10:19:44 -- nvmf/common.sh@294 -- # net_devs=() 00:23:11.464 10:19:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:11.464 10:19:44 -- nvmf/common.sh@295 -- # e810=() 00:23:11.464 10:19:44 -- nvmf/common.sh@295 -- # local -ga e810 00:23:11.464 10:19:44 -- nvmf/common.sh@296 -- # x722=() 
00:23:11.464 10:19:44 -- nvmf/common.sh@296 -- # local -ga x722 00:23:11.464 10:19:44 -- nvmf/common.sh@297 -- # mlx=() 00:23:11.464 10:19:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:11.464 10:19:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.464 10:19:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:11.464 10:19:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:11.464 10:19:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.464 10:19:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:11.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:11.464 10:19:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:11.464 10:19:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:11.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:11.464 10:19:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.464 10:19:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.464 10:19:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.464 10:19:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:11.464 Found net devices under 0000:af:00.0: cvl_0_0 00:23:11.464 10:19:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
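[editor's note] The device scan performed here by gather_supported_nvmf_pci_devs can be approximated outside the harness with a short sysfs walk. The sketch below only looks for the E810 ID actually seen in this log (8086:0x159b) and prints the kernel netdev behind each matching PCI function; the full vendor/device tables (0x1592, 0x37d2, the Mellanox parts) in nvmf/common.sh are omitted for brevity, so treat this as an illustration, not the harness code.

# Roughly what the pci_devs/pci_net_devs loops above compute for this host.
for dev in /sys/bus/pci/devices/*; do
    [ "$(cat "$dev/vendor")" = 0x8086 ] || continue   # Intel only
    [ "$(cat "$dev/device")" = 0x159b ] || continue   # E810 ID from this log
    for net in "$dev"/net/*; do                       # netdev bound to the function
        [ -e "$net" ] || continue
        echo "Found $(basename "$dev") -> $(basename "$net")"
    done
done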
00:23:11.464 10:19:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:11.464 10:19:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.464 10:19:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.464 10:19:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:11.464 Found net devices under 0000:af:00.1: cvl_0_1 00:23:11.464 10:19:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.464 10:19:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:11.464 10:19:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:11.464 10:19:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.464 10:19:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.464 10:19:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.464 10:19:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:11.464 10:19:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.464 10:19:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.464 10:19:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:11.464 10:19:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.464 10:19:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.464 10:19:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:11.464 10:19:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:11.464 10:19:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.464 10:19:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.464 10:19:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.464 10:19:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.464 10:19:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:11.464 10:19:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.464 10:19:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.464 10:19:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.464 10:19:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:11.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:23:11.464 00:23:11.464 --- 10.0.0.2 ping statistics --- 00:23:11.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.464 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:23:11.464 10:19:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:23:11.464 00:23:11.464 --- 10.0.0.1 ping statistics --- 00:23:11.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.464 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:23:11.464 10:19:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.464 10:19:44 -- nvmf/common.sh@410 -- # return 0 00:23:11.464 10:19:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:11.464 10:19:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.464 10:19:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:11.464 10:19:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.464 10:19:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:11.464 10:19:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:11.464 10:19:44 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3509484 00:23:11.464 10:19:44 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:11.464 10:19:44 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:11.464 10:19:44 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3509484 00:23:11.464 10:19:44 -- common/autotest_common.sh@819 -- # '[' -z 3509484 ']' 00:23:11.464 10:19:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.464 10:19:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:11.464 10:19:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
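[editor's note] The network plumbing that nvmf_tcp_init just performed above boils down to the following hand-rolled sketch: the target-side E810 port is moved into its own network namespace so the initiator in the default namespace reaches the SPDK target over real TCP. Interface names, the namespace name, and the addresses are copied from this log; running it elsewhere (and as root) is an assumption.

TARGET_IF=cvl_0_0          # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1       # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring the pings in the log, then load the initiator driver.
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
modprobe nvme-tcp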
00:23:11.465 10:19:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:11.465 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 10:19:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:12.843 10:19:45 -- common/autotest_common.sh@852 -- # return 0 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.843 10:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.843 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 10:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:12.843 10:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.843 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 Malloc0 00:23:12.843 10:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.843 10:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.843 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 10:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.843 10:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.843 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 10:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.843 10:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.843 10:19:45 -- common/autotest_common.sh@10 -- # set +x 00:23:12.843 10:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:12.843 10:19:45 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:44.927 Fuzzing completed. Shutting down the fuzz application 00:23:44.927 00:23:44.927 Dumping successful admin opcodes: 00:23:44.927 8, 9, 10, 24, 00:23:44.927 Dumping successful io opcodes: 00:23:44.927 0, 9, 00:23:44.927 NS: 0x200003aeff00 I/O qp, Total commands completed: 622061, total successful commands: 3626, random_seed: 3458296704 00:23:44.927 NS: 0x200003aeff00 admin qp, Total commands completed: 66959, total successful commands: 529, random_seed: 1429662848 00:23:44.927 10:20:16 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:44.927 Fuzzing completed. 
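[editor's note] The fabrics_fuzz flow above can be reproduced by hand. This is a minimal sketch assuming the SPDK tree and namespace from this log; it calls scripts/rpc.py directly where the harness uses its rpc_cmd wrapper, and a plain sleep stands in for the harness's waitforlisten on /var/tmp/spdk.sock.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

# Start the target inside the namespace (same flags as the log).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
sleep 3   # stand-in for waitforlisten on /var/tmp/spdk.sock

# One TCP subsystem backed by a 64 MiB malloc bdev, listening on 10.0.0.2:4420.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 30-second randomized pass with a fixed seed, then a replay of example.json,
# exactly as invoked in the log above; teardown (nvmf_delete_subsystem,
# killprocess, nvmf_tcp_fini) follows in the log below.
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" \
    -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a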
Shutting down the fuzz application 00:23:44.927 00:23:44.927 Dumping successful admin opcodes: 00:23:44.927 24, 00:23:44.927 Dumping successful io opcodes: 00:23:44.927 00:23:44.927 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1278267414 00:23:44.927 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1278384274 00:23:44.927 10:20:17 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.927 10:20:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.927 10:20:17 -- common/autotest_common.sh@10 -- # set +x 00:23:44.927 10:20:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.927 10:20:17 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:44.927 10:20:17 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:44.927 10:20:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:44.927 10:20:17 -- nvmf/common.sh@116 -- # sync 00:23:44.927 10:20:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:44.927 10:20:17 -- nvmf/common.sh@119 -- # set +e 00:23:44.927 10:20:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:44.927 10:20:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:44.927 rmmod nvme_tcp 00:23:44.927 rmmod nvme_fabrics 00:23:44.927 rmmod nvme_keyring 00:23:44.927 10:20:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:44.927 10:20:17 -- nvmf/common.sh@123 -- # set -e 00:23:44.927 10:20:17 -- nvmf/common.sh@124 -- # return 0 00:23:44.927 10:20:17 -- nvmf/common.sh@477 -- # '[' -n 3509484 ']' 00:23:44.927 10:20:17 -- nvmf/common.sh@478 -- # killprocess 3509484 00:23:44.927 10:20:17 -- common/autotest_common.sh@926 -- # '[' -z 3509484 ']' 00:23:44.927 10:20:17 -- common/autotest_common.sh@930 -- # kill -0 3509484 00:23:44.927 10:20:17 -- common/autotest_common.sh@931 -- # uname 00:23:44.927 10:20:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:44.927 10:20:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3509484 00:23:44.927 10:20:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:44.927 10:20:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:44.927 10:20:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3509484' 00:23:44.927 killing process with pid 3509484 00:23:44.927 10:20:17 -- common/autotest_common.sh@945 -- # kill 3509484 00:23:44.927 10:20:17 -- common/autotest_common.sh@950 -- # wait 3509484 00:23:44.927 10:20:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:44.927 10:20:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:44.927 10:20:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:44.927 10:20:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.927 10:20:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:44.927 10:20:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.927 10:20:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.927 10:20:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.831 10:20:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:46.831 10:20:20 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:47.090 00:23:47.090 real 0m41.231s 00:23:47.090 user 0m54.603s 00:23:47.090 sys 
0m16.102s 00:23:47.090 10:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.090 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.090 ************************************ 00:23:47.090 END TEST nvmf_fuzz 00:23:47.090 ************************************ 00:23:47.090 10:20:20 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:47.090 10:20:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:47.090 10:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:47.090 10:20:20 -- common/autotest_common.sh@10 -- # set +x 00:23:47.090 ************************************ 00:23:47.090 START TEST nvmf_multiconnection 00:23:47.090 ************************************ 00:23:47.090 10:20:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:47.090 * Looking for test storage... 00:23:47.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.090 10:20:20 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.090 10:20:20 -- nvmf/common.sh@7 -- # uname -s 00:23:47.090 10:20:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.090 10:20:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.090 10:20:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.090 10:20:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.090 10:20:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.090 10:20:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.090 10:20:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.090 10:20:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.090 10:20:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.090 10:20:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.090 10:20:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:47.090 10:20:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:47.090 10:20:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.090 10:20:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.090 10:20:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.090 10:20:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.090 10:20:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.090 10:20:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.090 10:20:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.090 10:20:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.091 10:20:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.091 10:20:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.091 10:20:20 -- paths/export.sh@5 -- # export PATH 00:23:47.091 10:20:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.091 10:20:20 -- nvmf/common.sh@46 -- # : 0 00:23:47.091 10:20:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:47.091 10:20:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:47.091 10:20:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:47.091 10:20:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.091 10:20:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.091 10:20:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:47.091 10:20:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:47.091 10:20:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:47.091 10:20:20 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.091 10:20:20 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.091 10:20:20 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:47.091 10:20:20 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:47.091 10:20:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:47.091 10:20:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.091 10:20:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:47.091 10:20:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:47.091 10:20:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:47.091 10:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.091 10:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.091 10:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.091 10:20:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:47.091 10:20:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:47.091 10:20:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:47.091 10:20:20 -- common/autotest_common.sh@10 -- 
# set +x 00:23:53.660 10:20:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:53.660 10:20:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:53.660 10:20:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:53.660 10:20:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:53.660 10:20:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:53.660 10:20:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:53.660 10:20:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:53.660 10:20:25 -- nvmf/common.sh@294 -- # net_devs=() 00:23:53.660 10:20:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:53.660 10:20:25 -- nvmf/common.sh@295 -- # e810=() 00:23:53.660 10:20:25 -- nvmf/common.sh@295 -- # local -ga e810 00:23:53.660 10:20:25 -- nvmf/common.sh@296 -- # x722=() 00:23:53.660 10:20:25 -- nvmf/common.sh@296 -- # local -ga x722 00:23:53.660 10:20:25 -- nvmf/common.sh@297 -- # mlx=() 00:23:53.660 10:20:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:53.660 10:20:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.660 10:20:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:53.660 10:20:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:53.660 10:20:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:53.660 10:20:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.660 10:20:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:53.660 10:20:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:53.660 10:20:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.660 10:20:25 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:53.660 10:20:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.660 10:20:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.660 10:20:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.660 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.660 10:20:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.660 10:20:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:53.660 10:20:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.660 10:20:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.660 10:20:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.660 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.660 10:20:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.660 10:20:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:53.660 10:20:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:53.660 10:20:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:53.660 10:20:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.660 10:20:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.660 10:20:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.660 10:20:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:53.660 10:20:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.660 10:20:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.660 10:20:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:53.660 10:20:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.660 10:20:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.660 10:20:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:53.660 10:20:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:53.660 10:20:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.660 10:20:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.660 10:20:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.660 10:20:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.660 10:20:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:53.660 10:20:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.660 10:20:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.660 10:20:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.660 10:20:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:53.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:53.660 00:23:53.660 --- 10.0.0.2 ping statistics --- 00:23:53.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.660 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:53.660 10:20:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:23:53.660 00:23:53.660 --- 10.0.0.1 ping statistics --- 00:23:53.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.660 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:23:53.660 10:20:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.660 10:20:26 -- nvmf/common.sh@410 -- # return 0 00:23:53.660 10:20:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:53.660 10:20:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.660 10:20:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:53.660 10:20:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:53.660 10:20:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.660 10:20:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:53.660 10:20:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:53.660 10:20:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:53.660 10:20:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:53.660 10:20:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:53.660 10:20:26 -- common/autotest_common.sh@10 -- # set +x 00:23:53.660 10:20:26 -- nvmf/common.sh@469 -- # nvmfpid=3519045 00:23:53.660 10:20:26 -- nvmf/common.sh@470 -- # waitforlisten 3519045 00:23:53.660 10:20:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:53.660 10:20:26 -- common/autotest_common.sh@819 -- # '[' -z 3519045 ']' 00:23:53.660 10:20:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.660 10:20:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:53.660 10:20:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.660 10:20:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:53.660 10:20:26 -- common/autotest_common.sh@10 -- # set +x 00:23:53.660 [2024-04-17 10:20:26.161339] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:53.660 [2024-04-17 10:20:26.161380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.660 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.660 [2024-04-17 10:20:26.234367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.660 [2024-04-17 10:20:26.320984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:53.660 [2024-04-17 10:20:26.321135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:53.660 [2024-04-17 10:20:26.321147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.660 [2024-04-17 10:20:26.321156] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.660 [2024-04-17 10:20:26.321257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.660 [2024-04-17 10:20:26.321357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.660 [2024-04-17 10:20:26.321472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.660 [2024-04-17 10:20:26.321472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.919 10:20:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:53.919 10:20:27 -- common/autotest_common.sh@852 -- # return 0 00:23:53.920 10:20:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:53.920 10:20:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.920 10:20:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 [2024-04-17 10:20:27.143486] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:53.920 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.920 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 Malloc1 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 [2024-04-17 10:20:27.203200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.920 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:53.920 10:20:27 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 Malloc2 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:53.920 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:53.920 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:53.920 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:53.920 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:53.920 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 Malloc3 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.180 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 Malloc4 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.180 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 Malloc5 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.180 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 Malloc6 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.180 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 Malloc7 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.180 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.180 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.180 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.180 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:54.180 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.181 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.181 Malloc8 00:23:54.181 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.181 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:54.181 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.181 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.181 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.181 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:54.181 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.181 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.440 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 Malloc9 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
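[editor's note] The eleven nearly identical rpc_cmd blocks in this multiconnection setup (Malloc1..Malloc11, cnode1..cnode11) reduce to one loop. The sketch below uses scripts/rpc.py directly in place of the harness's rpc_cmd wrapper; sizes, names, address, and port are copied from the log, everything else is an assumption.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in $(seq 1 11); do
    # 64 MiB malloc bdev with 512-byte blocks, one per subsystem.
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem with open host access and a predictable serial number.
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # All subsystems share the same TCP listener address and port.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done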
00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.440 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 Malloc10 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.440 10:20:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 Malloc11 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.440 10:20:27 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:54.440 10:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.440 10:20:27 -- common/autotest_common.sh@10 -- # set +x 00:23:54.440 10:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.441 10:20:27 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:54.441 10:20:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.441 10:20:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:55.817 10:20:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:55.817 10:20:29 -- common/autotest_common.sh@1177 -- # local i=0 00:23:55.817 10:20:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.817 10:20:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:55.817 10:20:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:57.723 10:20:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:57.723 10:20:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:57.723 10:20:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:23:57.723 10:20:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:57.723 10:20:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.723 10:20:31 -- common/autotest_common.sh@1187 -- # return 0 00:23:57.723 10:20:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.723 10:20:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:59.099 10:20:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:59.099 10:20:32 -- common/autotest_common.sh@1177 -- # local i=0 00:23:59.099 10:20:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.099 10:20:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:59.099 10:20:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:01.633 10:20:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:01.633 10:20:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:01.633 10:20:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:24:01.633 10:20:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:01.633 10:20:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.633 10:20:34 -- common/autotest_common.sh@1187 -- # return 0 00:24:01.633 10:20:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.633 10:20:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:02.569 10:20:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:02.569 10:20:35 -- common/autotest_common.sh@1177 -- # local i=0 00:24:02.569 10:20:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.569 10:20:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:02.569 10:20:35 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:24:04.475 10:20:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:04.475 10:20:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:04.475 10:20:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:24:04.475 10:20:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:04.475 10:20:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.475 10:20:37 -- common/autotest_common.sh@1187 -- # return 0 00:24:04.475 10:20:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.475 10:20:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:05.852 10:20:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:05.852 10:20:39 -- common/autotest_common.sh@1177 -- # local i=0 00:24:05.852 10:20:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:05.852 10:20:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:05.852 10:20:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:08.395 10:20:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:08.395 10:20:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:08.395 10:20:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:24:08.395 10:20:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:08.395 10:20:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:08.395 10:20:41 -- common/autotest_common.sh@1187 -- # return 0 00:24:08.395 10:20:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:08.395 10:20:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:09.388 10:20:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:09.388 10:20:42 -- common/autotest_common.sh@1177 -- # local i=0 00:24:09.388 10:20:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.388 10:20:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:09.388 10:20:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:11.297 10:20:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:11.556 10:20:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:11.556 10:20:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:24:11.556 10:20:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:11.556 10:20:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.556 10:20:44 -- common/autotest_common.sh@1187 -- # return 0 00:24:11.556 10:20:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.556 10:20:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:12.931 10:20:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:12.931 10:20:46 -- common/autotest_common.sh@1177 -- # local i=0 00:24:12.931 10:20:46 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:24:12.931 10:20:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:12.931 10:20:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:14.831 10:20:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:14.831 10:20:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:14.831 10:20:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:24:14.831 10:20:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:14.831 10:20:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:14.831 10:20:48 -- common/autotest_common.sh@1187 -- # return 0 00:24:14.831 10:20:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:14.831 10:20:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:16.734 10:20:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:16.734 10:20:49 -- common/autotest_common.sh@1177 -- # local i=0 00:24:16.734 10:20:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.734 10:20:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:16.734 10:20:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:18.634 10:20:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:18.634 10:20:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:18.634 10:20:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:24:18.634 10:20:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:18.634 10:20:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:18.634 10:20:51 -- common/autotest_common.sh@1187 -- # return 0 00:24:18.634 10:20:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.634 10:20:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:20.012 10:20:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:20.012 10:20:53 -- common/autotest_common.sh@1177 -- # local i=0 00:24:20.012 10:20:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.012 10:20:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:20.012 10:20:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:21.914 10:20:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:21.914 10:20:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:21.914 10:20:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:24:21.914 10:20:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:21.914 10:20:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.914 10:20:55 -- common/autotest_common.sh@1187 -- # return 0 00:24:21.914 10:20:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.914 10:20:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:23.817 10:20:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:23.817 
10:20:56 -- common/autotest_common.sh@1177 -- # local i=0 00:24:23.817 10:20:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.817 10:20:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:23.817 10:20:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:25.720 10:20:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:25.720 10:20:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:25.720 10:20:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:25.720 10:20:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:25.720 10:20:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.720 10:20:58 -- common/autotest_common.sh@1187 -- # return 0 00:24:25.720 10:20:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.720 10:20:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:27.624 10:21:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:27.624 10:21:00 -- common/autotest_common.sh@1177 -- # local i=0 00:24:27.624 10:21:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.624 10:21:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:27.624 10:21:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:29.528 10:21:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:29.528 10:21:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:29.528 10:21:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:29.528 10:21:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:29.528 10:21:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.528 10:21:02 -- common/autotest_common.sh@1187 -- # return 0 00:24:29.528 10:21:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.528 10:21:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:30.905 10:21:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:30.905 10:21:04 -- common/autotest_common.sh@1177 -- # local i=0 00:24:30.905 10:21:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.905 10:21:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:30.905 10:21:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:32.808 10:21:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:32.808 10:21:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:32.808 10:21:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:32.808 10:21:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:32.808 10:21:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.808 10:21:06 -- common/autotest_common.sh@1187 -- # return 0 00:24:32.808 10:21:06 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:32.808 [global] 00:24:32.808 thread=1 00:24:32.808 invalidate=1 00:24:32.808 rw=read 00:24:32.808 time_based=1 00:24:32.808 
runtime=10 00:24:32.808 ioengine=libaio 00:24:32.808 direct=1 00:24:32.808 bs=262144 00:24:32.808 iodepth=64 00:24:32.808 norandommap=1 00:24:32.808 numjobs=1 00:24:32.808 00:24:32.808 [job0] 00:24:32.808 filename=/dev/nvme0n1 00:24:32.808 [job1] 00:24:32.808 filename=/dev/nvme10n1 00:24:32.808 [job2] 00:24:32.808 filename=/dev/nvme1n1 00:24:32.808 [job3] 00:24:32.808 filename=/dev/nvme2n1 00:24:32.808 [job4] 00:24:32.808 filename=/dev/nvme3n1 00:24:32.808 [job5] 00:24:32.808 filename=/dev/nvme4n1 00:24:32.808 [job6] 00:24:32.808 filename=/dev/nvme5n1 00:24:32.808 [job7] 00:24:32.808 filename=/dev/nvme6n1 00:24:32.808 [job8] 00:24:32.808 filename=/dev/nvme7n1 00:24:32.808 [job9] 00:24:32.808 filename=/dev/nvme8n1 00:24:32.808 [job10] 00:24:32.808 filename=/dev/nvme9n1 00:24:33.067 Could not set queue depth (nvme0n1) 00:24:33.067 Could not set queue depth (nvme10n1) 00:24:33.067 Could not set queue depth (nvme1n1) 00:24:33.067 Could not set queue depth (nvme2n1) 00:24:33.067 Could not set queue depth (nvme3n1) 00:24:33.067 Could not set queue depth (nvme4n1) 00:24:33.067 Could not set queue depth (nvme5n1) 00:24:33.067 Could not set queue depth (nvme6n1) 00:24:33.067 Could not set queue depth (nvme7n1) 00:24:33.067 Could not set queue depth (nvme8n1) 00:24:33.067 Could not set queue depth (nvme9n1) 00:24:33.326 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:33.326 fio-3.35 00:24:33.326 Starting 11 threads 00:24:45.536 00:24:45.536 job0: (groupid=0, jobs=1): err= 0: pid=3526823: Wed Apr 17 10:21:16 2024 00:24:45.536 read: IOPS=582, BW=146MiB/s (153MB/s)(1466MiB/10059msec) 00:24:45.536 slat (usec): min=14, max=74242, avg=1305.12, stdev=4625.28 00:24:45.536 clat (usec): min=1531, max=271966, avg=108322.88, stdev=54846.12 00:24:45.536 lat (usec): min=1567, max=272027, avg=109628.00, stdev=55768.11 00:24:45.536 clat percentiles (msec): 00:24:45.536 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 31], 20.00th=[ 64], 00:24:45.536 | 30.00th=[ 87], 40.00th=[ 94], 50.00th=[ 102], 60.00th=[ 113], 00:24:45.536 | 70.00th=[ 136], 80.00th=[ 165], 90.00th=[ 188], 95.00th=[ 201], 00:24:45.536 | 99.00th=[ 218], 99.50th=[ 226], 99.90th=[ 239], 99.95th=[ 243], 00:24:45.536 | 99.99th=[ 271] 00:24:45.536 bw ( KiB/s): min=77312, max=368128, 
per=7.62%, avg=148480.00, stdev=67911.88, samples=20 00:24:45.536 iops : min= 302, max= 1438, avg=580.00, stdev=265.28, samples=20 00:24:45.536 lat (msec) : 2=0.03%, 4=0.41%, 10=2.46%, 20=3.96%, 50=9.82% 00:24:45.536 lat (msec) : 100=31.93%, 250=51.37%, 500=0.02% 00:24:45.536 cpu : usr=0.17%, sys=2.24%, ctx=1382, majf=0, minf=4097 00:24:45.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:45.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.536 issued rwts: total=5863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job1: (groupid=0, jobs=1): err= 0: pid=3526825: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=467, BW=117MiB/s (123MB/s)(1176MiB/10060msec) 00:24:45.537 slat (usec): min=11, max=103071, avg=1840.54, stdev=5448.48 00:24:45.537 clat (usec): min=1347, max=252724, avg=134883.84, stdev=47442.98 00:24:45.537 lat (usec): min=1374, max=252868, avg=136724.38, stdev=48183.58 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 105], 00:24:45.537 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 146], 60.00th=[ 157], 00:24:45.537 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 194], 00:24:45.537 | 99.00th=[ 211], 99.50th=[ 218], 99.90th=[ 226], 99.95th=[ 249], 00:24:45.537 | 99.99th=[ 253] 00:24:45.537 bw ( KiB/s): min=77312, max=337408, per=6.09%, avg=118817.75, stdev=55389.31, samples=20 00:24:45.537 iops : min= 302, max= 1318, avg=464.10, stdev=216.39, samples=20 00:24:45.537 lat (msec) : 2=0.04%, 4=0.17%, 10=0.45%, 20=1.02%, 50=9.74% 00:24:45.537 lat (msec) : 100=6.61%, 250=81.93%, 500=0.04% 00:24:45.537 cpu : usr=0.12%, sys=2.08%, ctx=1066, majf=0, minf=4097 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job2: (groupid=0, jobs=1): err= 0: pid=3526826: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=549, BW=137MiB/s (144MB/s)(1389MiB/10104msec) 00:24:45.537 slat (usec): min=14, max=103407, avg=1619.67, stdev=5159.38 00:24:45.537 clat (usec): min=975, max=266103, avg=114595.00, stdev=52250.08 00:24:45.537 lat (usec): min=1020, max=271119, avg=116214.67, stdev=53102.93 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 45], 20.00th=[ 64], 00:24:45.537 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 109], 60.00th=[ 134], 00:24:45.537 | 70.00th=[ 150], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 199], 00:24:45.537 | 99.00th=[ 215], 99.50th=[ 224], 99.90th=[ 239], 99.95th=[ 257], 00:24:45.537 | 99.99th=[ 266] 00:24:45.537 bw ( KiB/s): min=79872, max=292864, per=7.21%, avg=140637.15, stdev=59250.59, samples=20 00:24:45.537 iops : min= 312, max= 1144, avg=549.35, stdev=231.46, samples=20 00:24:45.537 lat (usec) : 1000=0.02% 00:24:45.537 lat (msec) : 2=0.02%, 4=0.43%, 10=0.32%, 20=0.88%, 50=11.75% 00:24:45.537 lat (msec) : 100=32.36%, 250=54.13%, 500=0.09% 00:24:45.537 cpu : usr=0.27%, sys=2.35%, ctx=1156, majf=0, minf=3221 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 
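For reference, the connect phase traced earlier in this section (target/multiconnection.sh@28-30 together with the waitforserial helper from autotest_common.sh) amounts to the loop sketched below. This is a condensed, illustrative approximation rather than the verbatim scripts: the 2-second poll interval and ~16-try limit come from the trace, while the variable names (NVMF_SUBSYS, tries) are assumptions made here for readability.

  # Condensed sketch of the connect loop traced above; not the verbatim test script.
  NVMF_SUBSYS=11
  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n "nqn.2016-06.io.spdk:cnode${i}" \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
          --hostid=00abaa28-3537-eb11-906e-0017a4403562
      # waitforserial: poll (up to ~16 tries, 2 s apart) until a block device whose
      # SERIAL column matches SPDK$i appears in lsblk output
      tries=0
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}")" -ge 1 ]; do
          (( tries++ >= 15 )) && { echo "SPDK${i} never appeared" >&2; exit 1; }
          sleep 2
      done
  done

The serial strings SPDK1 through SPDK11 are the ones assigned when each subsystem was created earlier in the test, which is why polling lsblk's SERIAL column is enough to tell that a given controller's namespace has attached.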
00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=5557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job3: (groupid=0, jobs=1): err= 0: pid=3526827: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=932, BW=233MiB/s (244MB/s)(2345MiB/10061msec) 00:24:45.537 slat (usec): min=11, max=156921, avg=879.42, stdev=4158.02 00:24:45.537 clat (msec): min=3, max=336, avg=67.69, stdev=49.63 00:24:45.537 lat (msec): min=3, max=336, avg=68.57, stdev=50.27 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 27], 00:24:45.537 | 30.00th=[ 31], 40.00th=[ 43], 50.00th=[ 53], 60.00th=[ 65], 00:24:45.537 | 70.00th=[ 81], 80.00th=[ 106], 90.00th=[ 138], 95.00th=[ 180], 00:24:45.537 | 99.00th=[ 218], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 288], 00:24:45.537 | 99.99th=[ 338] 00:24:45.537 bw ( KiB/s): min=63488, max=612352, per=12.23%, avg=238464.00, stdev=141376.12, samples=20 00:24:45.537 iops : min= 248, max= 2392, avg=931.50, stdev=552.25, samples=20 00:24:45.537 lat (msec) : 4=0.12%, 10=1.38%, 20=2.41%, 50=44.52%, 100=30.00% 00:24:45.537 lat (msec) : 250=21.20%, 500=0.38% 00:24:45.537 cpu : usr=0.28%, sys=3.23%, ctx=1811, majf=0, minf=4097 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=9378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job4: (groupid=0, jobs=1): err= 0: pid=3526828: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=647, BW=162MiB/s (170MB/s)(1634MiB/10093msec) 00:24:45.537 slat (usec): min=9, max=151485, avg=1256.49, stdev=5471.19 00:24:45.537 clat (usec): min=1025, max=304872, avg=97396.98, stdev=64049.55 00:24:45.537 lat (usec): min=1062, max=340059, avg=98653.47, stdev=64988.13 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 34], 00:24:45.537 | 30.00th=[ 42], 40.00th=[ 58], 50.00th=[ 87], 60.00th=[ 131], 00:24:45.537 | 70.00th=[ 150], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 194], 00:24:45.537 | 99.00th=[ 224], 99.50th=[ 257], 99.90th=[ 262], 99.95th=[ 262], 00:24:45.537 | 99.99th=[ 305] 00:24:45.537 bw ( KiB/s): min=87552, max=328704, per=8.50%, avg=165734.40, stdev=81088.68, samples=20 00:24:45.537 iops : min= 342, max= 1284, avg=647.40, stdev=316.75, samples=20 00:24:45.537 lat (msec) : 2=0.06%, 4=0.63%, 10=3.75%, 20=4.10%, 50=26.77% 00:24:45.537 lat (msec) : 100=18.53%, 250=45.59%, 500=0.58% 00:24:45.537 cpu : usr=0.28%, sys=2.39%, ctx=1400, majf=0, minf=4097 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=6537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job5: (groupid=0, jobs=1): err= 0: pid=3526829: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=691, BW=173MiB/s (181MB/s)(1741MiB/10064msec) 00:24:45.537 slat (usec): 
min=11, max=85753, avg=989.01, stdev=3999.14 00:24:45.537 clat (usec): min=1605, max=216151, avg=91387.67, stdev=52514.21 00:24:45.537 lat (usec): min=1638, max=268511, avg=92376.68, stdev=53165.18 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 38], 00:24:45.537 | 30.00th=[ 50], 40.00th=[ 79], 50.00th=[ 99], 60.00th=[ 113], 00:24:45.537 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 159], 95.00th=[ 167], 00:24:45.537 | 99.00th=[ 192], 99.50th=[ 207], 99.90th=[ 215], 99.95th=[ 215], 00:24:45.537 | 99.99th=[ 218] 00:24:45.537 bw ( KiB/s): min=99840, max=371712, per=9.06%, avg=176678.20, stdev=80316.75, samples=20 00:24:45.537 iops : min= 390, max= 1452, avg=690.10, stdev=313.77, samples=20 00:24:45.537 lat (msec) : 2=0.11%, 4=1.02%, 10=4.06%, 20=7.44%, 50=17.85% 00:24:45.537 lat (msec) : 100=21.37%, 250=48.15% 00:24:45.537 cpu : usr=0.14%, sys=2.58%, ctx=1581, majf=0, minf=4097 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=6964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job6: (groupid=0, jobs=1): err= 0: pid=3526830: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=415, BW=104MiB/s (109MB/s)(1048MiB/10090msec) 00:24:45.537 slat (usec): min=14, max=132579, avg=2386.40, stdev=6309.34 00:24:45.537 clat (msec): min=63, max=313, avg=151.49, stdev=28.96 00:24:45.537 lat (msec): min=73, max=313, avg=153.87, stdev=29.60 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 86], 5.00th=[ 99], 10.00th=[ 112], 20.00th=[ 126], 00:24:45.537 | 30.00th=[ 138], 40.00th=[ 148], 50.00th=[ 155], 60.00th=[ 161], 00:24:45.537 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 199], 00:24:45.537 | 99.00th=[ 213], 99.50th=[ 220], 99.90th=[ 226], 99.95th=[ 243], 00:24:45.537 | 99.99th=[ 313] 00:24:45.537 bw ( KiB/s): min=69632, max=166912, per=5.42%, avg=105702.40, stdev=20771.97, samples=20 00:24:45.537 iops : min= 272, max= 652, avg=412.90, stdev=81.14, samples=20 00:24:45.537 lat (msec) : 100=5.72%, 250=94.25%, 500=0.02% 00:24:45.537 cpu : usr=0.19%, sys=1.86%, ctx=900, majf=0, minf=4097 00:24:45.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:45.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.537 issued rwts: total=4193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.537 job7: (groupid=0, jobs=1): err= 0: pid=3526831: Wed Apr 17 10:21:16 2024 00:24:45.537 read: IOPS=777, BW=194MiB/s (204MB/s)(1959MiB/10083msec) 00:24:45.537 slat (usec): min=11, max=105487, avg=805.25, stdev=4110.51 00:24:45.537 clat (msec): min=2, max=241, avg=81.46, stdev=48.69 00:24:45.537 lat (msec): min=2, max=266, avg=82.27, stdev=49.26 00:24:45.537 clat percentiles (msec): 00:24:45.537 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 19], 20.00th=[ 33], 00:24:45.537 | 30.00th=[ 48], 40.00th=[ 61], 50.00th=[ 83], 60.00th=[ 97], 00:24:45.537 | 70.00th=[ 109], 80.00th=[ 123], 90.00th=[ 146], 95.00th=[ 165], 00:24:45.537 | 99.00th=[ 213], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 243], 00:24:45.537 | 99.99th=[ 243] 00:24:45.538 bw ( KiB/s): 
min=111104, max=383488, per=10.21%, avg=199014.40, stdev=62761.06, samples=20 00:24:45.538 iops : min= 434, max= 1498, avg=777.40, stdev=245.16, samples=20 00:24:45.538 lat (msec) : 4=0.36%, 10=2.31%, 20=8.46%, 50=20.68%, 100=31.06% 00:24:45.538 lat (msec) : 250=37.13% 00:24:45.538 cpu : usr=0.34%, sys=2.70%, ctx=1741, majf=0, minf=4097 00:24:45.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:45.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.538 issued rwts: total=7837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.538 job8: (groupid=0, jobs=1): err= 0: pid=3526832: Wed Apr 17 10:21:16 2024 00:24:45.538 read: IOPS=1230, BW=308MiB/s (323MB/s)(3085MiB/10025msec) 00:24:45.538 slat (usec): min=13, max=89496, avg=732.76, stdev=2623.10 00:24:45.538 clat (msec): min=3, max=195, avg=51.20, stdev=35.40 00:24:45.538 lat (msec): min=3, max=231, avg=51.93, stdev=35.86 00:24:45.538 clat percentiles (msec): 00:24:45.538 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 25], 00:24:45.538 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 47], 00:24:45.538 | 70.00th=[ 65], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 124], 00:24:45.538 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 186], 00:24:45.538 | 99.99th=[ 197] 00:24:45.538 bw ( KiB/s): min=117248, max=678400, per=16.12%, avg=314265.60, stdev=194083.98, samples=20 00:24:45.538 iops : min= 458, max= 2650, avg=1227.60, stdev=758.14, samples=20 00:24:45.538 lat (msec) : 4=0.02%, 10=0.12%, 20=1.22%, 50=62.11%, 100=23.96% 00:24:45.538 lat (msec) : 250=12.56% 00:24:45.538 cpu : usr=0.45%, sys=3.93%, ctx=2301, majf=0, minf=4097 00:24:45.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:45.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.538 issued rwts: total=12339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.538 job9: (groupid=0, jobs=1): err= 0: pid=3526833: Wed Apr 17 10:21:16 2024 00:24:45.538 read: IOPS=699, BW=175MiB/s (183MB/s)(1767MiB/10096msec) 00:24:45.538 slat (usec): min=10, max=96298, avg=1172.12, stdev=3901.57 00:24:45.538 clat (usec): min=1434, max=250213, avg=90170.63, stdev=34852.64 00:24:45.538 lat (usec): min=1477, max=250240, avg=91342.75, stdev=35356.69 00:24:45.538 clat percentiles (msec): 00:24:45.538 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 62], 00:24:45.538 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 95], 60.00th=[ 101], 00:24:45.538 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 130], 95.00th=[ 146], 00:24:45.538 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 218], 99.95th=[ 234], 00:24:45.538 | 99.99th=[ 251] 00:24:45.538 bw ( KiB/s): min=116736, max=391168, per=9.20%, avg=179276.80, stdev=57083.87, samples=20 00:24:45.538 iops : min= 456, max= 1528, avg=700.30, stdev=222.98, samples=20 00:24:45.538 lat (msec) : 2=0.03%, 4=0.45%, 10=0.71%, 20=1.26%, 50=12.57% 00:24:45.538 lat (msec) : 100=45.27%, 250=39.70%, 500=0.01% 00:24:45.538 cpu : usr=0.30%, sys=2.71%, ctx=1480, majf=0, minf=4097 00:24:45.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:45.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
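The [global]/[jobN] listing dumped above is the job file that the fio-wrapper call (target/multiconnection.sh@33, with "-p nvmf -i 262144 -d 64 -t read -r 10") generates before launching fio against the 11 connected namespaces. A minimal stand-alone reproduction is sketched below, assuming the same block size, queue depth, and runtime seen in the dump; /tmp/nvmf.fio is an arbitrary illustrative path and the per-device job list is abbreviated.

  # Approximate stand-alone equivalent of the fio-wrapper invocation above.
  cat > /tmp/nvmf.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=262144
  iodepth=64
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme10n1
  # ... one [jobN] section per connected namespace, 11 in total
  EOF
  fio /tmp/nvmf.fio

The second run later in this section is identical except that the wrapper is called with "-t randwrite", which simply flips rw=read to rw=randwrite in the generated job file.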
00:24:45.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.538 issued rwts: total=7066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.538 job10: (groupid=0, jobs=1): err= 0: pid=3526834: Wed Apr 17 10:21:16 2024 00:24:45.538 read: IOPS=649, BW=162MiB/s (170MB/s)(1628MiB/10026msec) 00:24:45.538 slat (usec): min=13, max=140251, avg=1101.38, stdev=5104.79 00:24:45.538 clat (usec): min=1400, max=261899, avg=97260.09, stdev=63276.07 00:24:45.538 lat (usec): min=1446, max=323458, avg=98361.48, stdev=64182.99 00:24:45.538 clat percentiles (msec): 00:24:45.538 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 16], 20.00th=[ 30], 00:24:45.538 | 30.00th=[ 46], 40.00th=[ 66], 50.00th=[ 93], 60.00th=[ 120], 00:24:45.538 | 70.00th=[ 150], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 192], 00:24:45.538 | 99.00th=[ 211], 99.50th=[ 213], 99.90th=[ 236], 99.95th=[ 239], 00:24:45.538 | 99.99th=[ 262] 00:24:45.538 bw ( KiB/s): min=88576, max=367616, per=8.47%, avg=165120.00, stdev=78321.38, samples=20 00:24:45.538 iops : min= 346, max= 1436, avg=645.00, stdev=305.94, samples=20 00:24:45.538 lat (msec) : 2=0.09%, 4=1.37%, 10=3.79%, 20=8.18%, 50=19.01% 00:24:45.538 lat (msec) : 100=21.39%, 250=46.14%, 500=0.03% 00:24:45.538 cpu : usr=0.23%, sys=2.46%, ctx=1508, majf=0, minf=4097 00:24:45.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:45.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:45.538 issued rwts: total=6513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:45.538 00:24:45.538 Run status group 0 (all jobs): 00:24:45.538 READ: bw=1904MiB/s (1996MB/s), 104MiB/s-308MiB/s (109MB/s-323MB/s), io=18.8GiB (20.2GB), run=10025-10104msec 00:24:45.538 00:24:45.538 Disk stats (read/write): 00:24:45.538 nvme0n1: ios=11566/0, merge=0/0, ticks=1238352/0, in_queue=1238352, util=97.38% 00:24:45.538 nvme10n1: ios=9200/0, merge=0/0, ticks=1237506/0, in_queue=1237506, util=97.55% 00:24:45.538 nvme1n1: ios=10935/0, merge=0/0, ticks=1230378/0, in_queue=1230378, util=97.89% 00:24:45.538 nvme2n1: ios=18568/0, merge=0/0, ticks=1236349/0, in_queue=1236349, util=97.97% 00:24:45.538 nvme3n1: ios=12934/0, merge=0/0, ticks=1233241/0, in_queue=1233241, util=98.05% 00:24:45.538 nvme4n1: ios=13658/0, merge=0/0, ticks=1238373/0, in_queue=1238373, util=98.34% 00:24:45.538 nvme5n1: ios=8196/0, merge=0/0, ticks=1224228/0, in_queue=1224228, util=98.45% 00:24:45.538 nvme6n1: ios=15391/0, merge=0/0, ticks=1242969/0, in_queue=1242969, util=98.55% 00:24:45.538 nvme7n1: ios=24499/0, merge=0/0, ticks=1241252/0, in_queue=1241252, util=98.98% 00:24:45.538 nvme8n1: ios=14001/0, merge=0/0, ticks=1238078/0, in_queue=1238078, util=99.09% 00:24:45.538 nvme9n1: ios=12792/0, merge=0/0, ticks=1237415/0, in_queue=1237415, util=99.24% 00:24:45.538 10:21:17 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:45.538 [global] 00:24:45.538 thread=1 00:24:45.538 invalidate=1 00:24:45.538 rw=randwrite 00:24:45.538 time_based=1 00:24:45.538 runtime=10 00:24:45.538 ioengine=libaio 00:24:45.538 direct=1 00:24:45.538 bs=262144 00:24:45.538 iodepth=64 00:24:45.538 norandommap=1 00:24:45.538 numjobs=1 00:24:45.538 00:24:45.538 [job0] 00:24:45.538 
filename=/dev/nvme0n1 00:24:45.538 [job1] 00:24:45.538 filename=/dev/nvme10n1 00:24:45.538 [job2] 00:24:45.538 filename=/dev/nvme1n1 00:24:45.538 [job3] 00:24:45.538 filename=/dev/nvme2n1 00:24:45.538 [job4] 00:24:45.538 filename=/dev/nvme3n1 00:24:45.538 [job5] 00:24:45.538 filename=/dev/nvme4n1 00:24:45.538 [job6] 00:24:45.538 filename=/dev/nvme5n1 00:24:45.538 [job7] 00:24:45.538 filename=/dev/nvme6n1 00:24:45.538 [job8] 00:24:45.538 filename=/dev/nvme7n1 00:24:45.538 [job9] 00:24:45.538 filename=/dev/nvme8n1 00:24:45.538 [job10] 00:24:45.538 filename=/dev/nvme9n1 00:24:45.538 Could not set queue depth (nvme0n1) 00:24:45.538 Could not set queue depth (nvme10n1) 00:24:45.538 Could not set queue depth (nvme1n1) 00:24:45.538 Could not set queue depth (nvme2n1) 00:24:45.538 Could not set queue depth (nvme3n1) 00:24:45.538 Could not set queue depth (nvme4n1) 00:24:45.538 Could not set queue depth (nvme5n1) 00:24:45.538 Could not set queue depth (nvme6n1) 00:24:45.538 Could not set queue depth (nvme7n1) 00:24:45.538 Could not set queue depth (nvme8n1) 00:24:45.538 Could not set queue depth (nvme9n1) 00:24:45.538 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:45.538 fio-3.35 00:24:45.538 Starting 11 threads 00:24:55.515 00:24:55.515 job0: (groupid=0, jobs=1): err= 0: pid=3528704: Wed Apr 17 10:21:28 2024 00:24:55.515 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(756MiB/10188msec); 0 zone resets 00:24:55.515 slat (usec): min=26, max=43062, avg=3304.40, stdev=6099.21 00:24:55.515 clat (msec): min=28, max=370, avg=212.23, stdev=36.67 00:24:55.515 lat (msec): min=28, max=370, avg=215.53, stdev=36.69 00:24:55.515 clat percentiles (msec): 00:24:55.515 | 1.00th=[ 89], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 182], 00:24:55.515 | 30.00th=[ 201], 40.00th=[ 209], 50.00th=[ 218], 60.00th=[ 224], 00:24:55.515 | 70.00th=[ 234], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 257], 00:24:55.515 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:24:55.515 | 99.99th=[ 372] 00:24:55.515 bw ( KiB/s): min=65536, max=98304, per=5.88%, avg=75776.00, stdev=9158.93, samples=20 00:24:55.515 iops : min= 256, max= 384, avg=296.00, stdev=35.78, samples=20 00:24:55.515 lat (msec) : 
50=0.40%, 100=0.79%, 250=87.57%, 500=11.24% 00:24:55.515 cpu : usr=0.84%, sys=0.95%, ctx=803, majf=0, minf=1 00:24:55.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:24:55.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.515 issued rwts: total=0,3024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.515 job1: (groupid=0, jobs=1): err= 0: pid=3528709: Wed Apr 17 10:21:28 2024 00:24:55.515 write: IOPS=751, BW=188MiB/s (197MB/s)(1902MiB/10124msec); 0 zone resets 00:24:55.515 slat (usec): min=23, max=17458, avg=1144.27, stdev=2283.44 00:24:55.515 clat (usec): min=1987, max=280909, avg=83991.72, stdev=35660.13 00:24:55.515 lat (msec): min=2, max=280, avg=85.14, stdev=35.88 00:24:55.515 clat percentiles (msec): 00:24:55.515 | 1.00th=[ 11], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 58], 00:24:55.515 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 87], 00:24:55.515 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 144], 00:24:55.515 | 99.00th=[ 194], 99.50th=[ 245], 99.90th=[ 271], 99.95th=[ 275], 00:24:55.515 | 99.99th=[ 279] 00:24:55.515 bw ( KiB/s): min=120320, max=301056, per=15.00%, avg=193152.00, stdev=54137.06, samples=20 00:24:55.515 iops : min= 470, max= 1176, avg=754.50, stdev=211.47, samples=20 00:24:55.515 lat (msec) : 2=0.01%, 4=0.21%, 10=0.71%, 20=1.54%, 50=3.25% 00:24:55.515 lat (msec) : 100=65.27%, 250=28.58%, 500=0.43% 00:24:55.515 cpu : usr=1.77%, sys=2.08%, ctx=2652, majf=0, minf=1 00:24:55.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:55.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.515 issued rwts: total=0,7608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.515 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.515 job2: (groupid=0, jobs=1): err= 0: pid=3528726: Wed Apr 17 10:21:28 2024 00:24:55.515 write: IOPS=374, BW=93.7MiB/s (98.3MB/s)(954MiB/10178msec); 0 zone resets 00:24:55.515 slat (usec): min=25, max=119434, avg=2095.34, stdev=6555.96 00:24:55.515 clat (msec): min=2, max=418, avg=168.47, stdev=83.41 00:24:55.515 lat (msec): min=2, max=418, avg=170.57, stdev=84.44 00:24:55.515 clat percentiles (msec): 00:24:55.515 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 47], 20.00th=[ 95], 00:24:55.515 | 30.00th=[ 122], 40.00th=[ 131], 50.00th=[ 180], 60.00th=[ 211], 00:24:55.515 | 70.00th=[ 228], 80.00th=[ 245], 90.00th=[ 266], 95.00th=[ 288], 00:24:55.515 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 405], 99.95th=[ 418], 00:24:55.515 | 99.99th=[ 418] 00:24:55.516 bw ( KiB/s): min=49664, max=177664, per=7.46%, avg=96076.80, stdev=38444.09, samples=20 00:24:55.516 iops : min= 194, max= 694, avg=375.30, stdev=150.17, samples=20 00:24:55.516 lat (msec) : 4=0.13%, 10=0.84%, 20=3.25%, 50=6.08%, 100=14.28% 00:24:55.516 lat (msec) : 250=59.30%, 500=16.12% 00:24:55.516 cpu : usr=0.68%, sys=1.33%, ctx=1885, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,3816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:24:55.516 job3: (groupid=0, jobs=1): err= 0: pid=3528748: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=563, BW=141MiB/s (148MB/s)(1418MiB/10067msec); 0 zone resets 00:24:55.516 slat (usec): min=24, max=50738, avg=1515.49, stdev=3555.29 00:24:55.516 clat (usec): min=1168, max=307097, avg=112017.15, stdev=64887.85 00:24:55.516 lat (usec): min=1225, max=307133, avg=113532.64, stdev=65731.16 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 46], 00:24:55.516 | 30.00th=[ 78], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 122], 00:24:55.516 | 70.00th=[ 130], 80.00th=[ 159], 90.00th=[ 224], 95.00th=[ 234], 00:24:55.516 | 99.00th=[ 253], 99.50th=[ 268], 99.90th=[ 300], 99.95th=[ 305], 00:24:55.516 | 99.99th=[ 309] 00:24:55.516 bw ( KiB/s): min=69632, max=358912, per=11.15%, avg=143600.40, stdev=71748.25, samples=20 00:24:55.516 iops : min= 272, max= 1402, avg=560.90, stdev=280.26, samples=20 00:24:55.516 lat (msec) : 2=0.21%, 4=0.48%, 10=1.67%, 20=2.89%, 50=16.50% 00:24:55.516 lat (msec) : 100=24.17%, 250=52.74%, 500=1.34% 00:24:55.516 cpu : usr=1.46%, sys=1.53%, ctx=2457, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,5673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job4: (groupid=0, jobs=1): err= 0: pid=3528756: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=509, BW=127MiB/s (134MB/s)(1282MiB/10065msec); 0 zone resets 00:24:55.516 slat (usec): min=20, max=56319, avg=1690.84, stdev=3682.59 00:24:55.516 clat (usec): min=1236, max=244088, avg=123918.21, stdev=51994.14 00:24:55.516 lat (usec): min=1293, max=244128, avg=125609.05, stdev=52749.56 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 73], 00:24:55.516 | 30.00th=[ 122], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 138], 00:24:55.516 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 194], 00:24:55.516 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 243], 99.95th=[ 245], 00:24:55.516 | 99.99th=[ 245] 00:24:55.516 bw ( KiB/s): min=73728, max=352768, per=10.06%, avg=129638.40, stdev=58408.08, samples=20 00:24:55.516 iops : min= 288, max= 1378, avg=506.40, stdev=228.16, samples=20 00:24:55.516 lat (msec) : 2=0.23%, 4=0.49%, 10=1.17%, 20=2.36%, 50=11.55% 00:24:55.516 lat (msec) : 100=9.73%, 250=74.47% 00:24:55.516 cpu : usr=1.05%, sys=1.37%, ctx=2194, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,5127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job5: (groupid=0, jobs=1): err= 0: pid=3528780: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=504, BW=126MiB/s (132MB/s)(1277MiB/10132msec); 0 zone resets 00:24:55.516 slat (usec): min=21, max=15050, avg=1802.86, stdev=3478.91 00:24:55.516 clat (msec): min=2, max=286, avg=125.09, stdev=41.77 00:24:55.516 lat (msec): min=2, max=286, avg=126.89, stdev=42.23 00:24:55.516 clat percentiles (msec): 
00:24:55.516 | 1.00th=[ 7], 5.00th=[ 43], 10.00th=[ 90], 20.00th=[ 96], 00:24:55.516 | 30.00th=[ 102], 40.00th=[ 110], 50.00th=[ 131], 60.00th=[ 138], 00:24:55.516 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 178], 00:24:55.516 | 99.00th=[ 207], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:24:55.516 | 99.99th=[ 288] 00:24:55.516 bw ( KiB/s): min=94208, max=184832, per=10.03%, avg=129152.00, stdev=28468.63, samples=20 00:24:55.516 iops : min= 368, max= 722, avg=504.50, stdev=111.21, samples=20 00:24:55.516 lat (msec) : 4=0.20%, 10=1.96%, 20=1.17%, 50=2.58%, 100=19.44% 00:24:55.516 lat (msec) : 250=74.37%, 500=0.27% 00:24:55.516 cpu : usr=1.19%, sys=1.42%, ctx=1800, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,5108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job6: (groupid=0, jobs=1): err= 0: pid=3528791: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=356, BW=89.1MiB/s (93.4MB/s)(900MiB/10103msec); 0 zone resets 00:24:55.516 slat (usec): min=25, max=77391, avg=2560.79, stdev=5444.58 00:24:55.516 clat (usec): min=1874, max=351575, avg=177037.73, stdev=66685.44 00:24:55.516 lat (msec): min=2, max=351, avg=179.60, stdev=67.62 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 7], 5.00th=[ 41], 10.00th=[ 77], 20.00th=[ 103], 00:24:55.516 | 30.00th=[ 157], 40.00th=[ 180], 50.00th=[ 203], 60.00th=[ 213], 00:24:55.516 | 70.00th=[ 222], 80.00th=[ 232], 90.00th=[ 243], 95.00th=[ 251], 00:24:55.516 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 342], 99.95th=[ 342], 00:24:55.516 | 99.99th=[ 351] 00:24:55.516 bw ( KiB/s): min=61440, max=167936, per=7.03%, avg=90521.60, stdev=29087.44, samples=20 00:24:55.516 iops : min= 240, max= 656, avg=353.60, stdev=113.62, samples=20 00:24:55.516 lat (msec) : 2=0.03%, 4=0.44%, 10=0.69%, 20=1.00%, 50=4.14% 00:24:55.516 lat (msec) : 100=11.03%, 250=77.41%, 500=5.25% 00:24:55.516 cpu : usr=0.75%, sys=1.07%, ctx=1345, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,3599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job7: (groupid=0, jobs=1): err= 0: pid=3528801: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=415, BW=104MiB/s (109MB/s)(1057MiB/10184msec); 0 zone resets 00:24:55.516 slat (usec): min=19, max=57712, avg=1925.77, stdev=4472.23 00:24:55.516 clat (usec): min=1841, max=358719, avg=152131.15, stdev=68773.86 00:24:55.516 lat (usec): min=1886, max=358783, avg=154056.92, stdev=69695.47 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 44], 20.00th=[ 89], 00:24:55.516 | 30.00th=[ 122], 40.00th=[ 155], 50.00th=[ 171], 60.00th=[ 176], 00:24:55.516 | 70.00th=[ 199], 80.00th=[ 211], 90.00th=[ 228], 95.00th=[ 239], 00:24:55.516 | 99.00th=[ 288], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 355], 00:24:55.516 | 99.99th=[ 359] 00:24:55.516 bw ( KiB/s): min=63488, max=230400, per=8.28%, avg=106624.00, stdev=40391.46, samples=20 00:24:55.516 iops 
: min= 248, max= 900, avg=416.50, stdev=157.78, samples=20 00:24:55.516 lat (msec) : 2=0.02%, 4=0.43%, 10=0.69%, 20=1.96%, 50=8.54% 00:24:55.516 lat (msec) : 100=15.25%, 250=70.28%, 500=2.84% 00:24:55.516 cpu : usr=0.85%, sys=1.26%, ctx=2039, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,4229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job8: (groupid=0, jobs=1): err= 0: pid=3528825: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=442, BW=111MiB/s (116MB/s)(1111MiB/10048msec); 0 zone resets 00:24:55.516 slat (usec): min=21, max=168604, avg=1818.90, stdev=5417.31 00:24:55.516 clat (usec): min=1273, max=421318, avg=142824.44, stdev=91780.77 00:24:55.516 lat (usec): min=1308, max=421357, avg=144643.34, stdev=93006.29 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 23], 20.00th=[ 50], 00:24:55.516 | 30.00th=[ 55], 40.00th=[ 107], 50.00th=[ 169], 60.00th=[ 190], 00:24:55.516 | 70.00th=[ 211], 80.00th=[ 234], 90.00th=[ 251], 95.00th=[ 262], 00:24:55.516 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 397], 99.95th=[ 401], 00:24:55.516 | 99.99th=[ 422] 00:24:55.516 bw ( KiB/s): min=61440, max=303104, per=8.71%, avg=112179.20, stdev=69982.01, samples=20 00:24:55.516 iops : min= 240, max= 1184, avg=438.20, stdev=273.37, samples=20 00:24:55.516 lat (msec) : 2=0.67%, 4=0.67%, 10=3.94%, 20=3.82%, 50=10.98% 00:24:55.516 lat (msec) : 100=19.30%, 250=50.75%, 500=9.85% 00:24:55.516 cpu : usr=0.98%, sys=1.25%, ctx=2317, majf=0, minf=1 00:24:55.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:55.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.516 issued rwts: total=0,4445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.516 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.516 job9: (groupid=0, jobs=1): err= 0: pid=3528836: Wed Apr 17 10:21:28 2024 00:24:55.516 write: IOPS=372, BW=93.2MiB/s (97.7MB/s)(949MiB/10184msec); 0 zone resets 00:24:55.516 slat (usec): min=27, max=85232, avg=2370.94, stdev=5607.60 00:24:55.516 clat (msec): min=2, max=371, avg=169.21, stdev=79.12 00:24:55.516 lat (msec): min=2, max=371, avg=171.58, stdev=80.16 00:24:55.516 clat percentiles (msec): 00:24:55.516 | 1.00th=[ 18], 5.00th=[ 38], 10.00th=[ 52], 20.00th=[ 72], 00:24:55.516 | 30.00th=[ 102], 40.00th=[ 178], 50.00th=[ 199], 60.00th=[ 215], 00:24:55.516 | 70.00th=[ 228], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 264], 00:24:55.516 | 99.00th=[ 288], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 372], 00:24:55.516 | 99.99th=[ 372] 00:24:55.516 bw ( KiB/s): min=63488, max=274944, per=7.42%, avg=95564.80, stdev=54415.51, samples=20 00:24:55.516 iops : min= 248, max= 1074, avg=373.30, stdev=212.56, samples=20 00:24:55.516 lat (msec) : 4=0.16%, 10=0.11%, 20=1.13%, 50=8.06%, 100=19.59% 00:24:55.516 lat (msec) : 250=60.05%, 500=10.90% 00:24:55.517 cpu : usr=0.81%, sys=1.07%, ctx=1481, majf=0, minf=1 00:24:55.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:24:55.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.517 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.517 issued rwts: total=0,3797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.517 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.517 job10: (groupid=0, jobs=1): err= 0: pid=3528842: Wed Apr 17 10:21:28 2024 00:24:55.517 write: IOPS=477, BW=119MiB/s (125MB/s)(1208MiB/10128msec); 0 zone resets 00:24:55.517 slat (usec): min=39, max=32825, avg=2038.45, stdev=3721.20 00:24:55.517 clat (msec): min=10, max=286, avg=131.92, stdev=35.64 00:24:55.517 lat (msec): min=11, max=286, avg=133.95, stdev=36.02 00:24:55.517 clat percentiles (msec): 00:24:55.517 | 1.00th=[ 37], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 97], 00:24:55.517 | 30.00th=[ 103], 40.00th=[ 125], 50.00th=[ 136], 60.00th=[ 140], 00:24:55.517 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 176], 95.00th=[ 178], 00:24:55.517 | 99.00th=[ 209], 99.50th=[ 230], 99.90th=[ 275], 99.95th=[ 275], 00:24:55.517 | 99.99th=[ 288] 00:24:55.517 bw ( KiB/s): min=92160, max=169984, per=9.48%, avg=122112.00, stdev=26857.47, samples=20 00:24:55.517 iops : min= 360, max= 664, avg=477.00, stdev=104.91, samples=20 00:24:55.517 lat (msec) : 20=0.21%, 50=1.39%, 100=21.52%, 250=76.60%, 500=0.29% 00:24:55.517 cpu : usr=1.40%, sys=1.17%, ctx=1359, majf=0, minf=1 00:24:55.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:55.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:55.517 issued rwts: total=0,4833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.517 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:55.517 00:24:55.517 Run status group 0 (all jobs): 00:24:55.517 WRITE: bw=1258MiB/s (1319MB/s), 74.2MiB/s-188MiB/s (77.8MB/s-197MB/s), io=12.5GiB (13.4GB), run=10048-10188msec 00:24:55.517 00:24:55.517 Disk stats (read/write): 00:24:55.517 nvme0n1: ios=50/6014, merge=0/0, ticks=387/1229261, in_queue=1229648, util=98.49% 00:24:55.517 nvme10n1: ios=50/15015, merge=0/0, ticks=67/1208445, in_queue=1208512, util=97.59% 00:24:55.517 nvme1n1: ios=47/7611, merge=0/0, ticks=3442/1231273, in_queue=1234715, util=99.81% 00:24:55.517 nvme2n1: ios=40/11049, merge=0/0, ticks=262/1213475, in_queue=1213737, util=98.22% 00:24:55.517 nvme3n1: ios=28/9952, merge=0/0, ticks=39/1212857, in_queue=1212896, util=97.78% 00:24:55.517 nvme4n1: ios=0/10019, merge=0/0, ticks=0/1206085, in_queue=1206085, util=98.04% 00:24:55.517 nvme5n1: ios=0/6925, merge=0/0, ticks=0/1195978, in_queue=1195978, util=98.23% 00:24:55.517 nvme6n1: ios=0/8431, merge=0/0, ticks=0/1239070, in_queue=1239070, util=98.41% 00:24:55.517 nvme7n1: ios=0/8494, merge=0/0, ticks=0/1211236, in_queue=1211236, util=98.74% 00:24:55.517 nvme8n1: ios=0/7567, merge=0/0, ticks=0/1234936, in_queue=1234936, util=98.96% 00:24:55.517 nvme9n1: ios=43/9469, merge=0/0, ticks=1322/1201323, in_queue=1202645, util=99.96% 00:24:55.517 10:21:28 -- target/multiconnection.sh@36 -- # sync 00:24:55.517 10:21:28 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:55.517 10:21:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.517 10:21:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:55.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:55.517 10:21:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:55.517 10:21:28 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.517 10:21:28 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.517 10:21:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:24:55.517 10:21:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.517 10:21:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:55.517 10:21:28 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.517 10:21:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.517 10:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.517 10:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:55.517 10:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.517 10:21:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.517 10:21:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:55.775 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:55.775 10:21:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:55.775 10:21:28 -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.775 10:21:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:55.775 10:21:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:24:55.775 10:21:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:55.775 10:21:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:55.775 10:21:28 -- common/autotest_common.sh@1210 -- # return 0 00:24:55.775 10:21:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:55.775 10:21:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.775 10:21:28 -- common/autotest_common.sh@10 -- # set +x 00:24:55.775 10:21:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.775 10:21:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.775 10:21:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:56.034 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:56.034 10:21:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:56.034 10:21:29 -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.034 10:21:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:56.034 10:21:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:24:56.034 10:21:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.034 10:21:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:56.034 10:21:29 -- common/autotest_common.sh@1210 -- # return 0 00:24:56.034 10:21:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:56.034 10:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.034 10:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:56.034 10:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.034 10:21:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.034 10:21:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:56.601 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:56.601 10:21:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:56.601 10:21:29 -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.601 10:21:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:56.601 10:21:29 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:24:56.601 10:21:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:56.601 10:21:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.601 10:21:29 -- common/autotest_common.sh@1210 -- # return 0 00:24:56.601 10:21:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:56.601 10:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.601 10:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:56.601 10:21:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.601 10:21:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.601 10:21:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:56.860 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:56.860 10:21:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:56.860 10:21:29 -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.860 10:21:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:56.860 10:21:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:24:56.860 10:21:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:56.860 10:21:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:56.860 10:21:29 -- common/autotest_common.sh@1210 -- # return 0 00:24:56.860 10:21:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:56.860 10:21:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.860 10:21:29 -- common/autotest_common.sh@10 -- # set +x 00:24:56.860 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.860 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.860 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:57.118 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:57.118 10:21:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:57.118 10:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.118 10:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.118 10:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:24:57.118 10:21:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.118 10:21:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:57.118 10:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.118 10:21:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:57.118 10:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.118 10:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.118 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.118 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.118 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:57.118 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:57.118 10:21:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:57.118 10:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.118 10:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.118 10:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:24:57.376 10:21:30 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.376 10:21:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:57.376 10:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.376 10:21:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:57.376 10:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.376 10:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.376 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.376 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.376 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:57.376 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:57.376 10:21:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:57.376 10:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.376 10:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.376 10:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:24:57.376 10:21:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.376 10:21:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:57.376 10:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.376 10:21:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:57.376 10:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.376 10:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.376 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.376 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.376 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:57.376 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:57.376 10:21:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:57.376 10:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.376 10:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:24:57.376 10:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.635 10:21:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.635 10:21:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:57.635 10:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.636 10:21:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:57.636 10:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.636 10:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.636 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.636 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.636 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:57.636 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:57.636 10:21:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:57.636 10:21:30 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.636 10:21:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.636 10:21:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:24:57.636 10:21:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.636 10:21:30 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:24:57.636 10:21:30 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.636 10:21:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:57.636 10:21:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.636 10:21:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.636 10:21:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.636 10:21:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.636 10:21:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:57.895 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:57.895 10:21:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:57.895 10:21:31 -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.895 10:21:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:57.895 10:21:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:24:57.895 10:21:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:57.895 10:21:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:24:57.895 10:21:31 -- common/autotest_common.sh@1210 -- # return 0 00:24:57.895 10:21:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:57.895 10:21:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.895 10:21:31 -- common/autotest_common.sh@10 -- # set +x 00:24:57.895 10:21:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.895 10:21:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:57.895 10:21:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:57.895 10:21:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:57.895 10:21:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:57.895 10:21:31 -- nvmf/common.sh@116 -- # sync 00:24:57.895 10:21:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:57.895 10:21:31 -- nvmf/common.sh@119 -- # set +e 00:24:57.895 10:21:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:57.895 10:21:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:57.895 rmmod nvme_tcp 00:24:57.895 rmmod nvme_fabrics 00:24:57.895 rmmod nvme_keyring 00:24:58.154 10:21:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:58.154 10:21:31 -- nvmf/common.sh@123 -- # set -e 00:24:58.154 10:21:31 -- nvmf/common.sh@124 -- # return 0 00:24:58.154 10:21:31 -- nvmf/common.sh@477 -- # '[' -n 3519045 ']' 00:24:58.154 10:21:31 -- nvmf/common.sh@478 -- # killprocess 3519045 00:24:58.154 10:21:31 -- common/autotest_common.sh@926 -- # '[' -z 3519045 ']' 00:24:58.154 10:21:31 -- common/autotest_common.sh@930 -- # kill -0 3519045 00:24:58.154 10:21:31 -- common/autotest_common.sh@931 -- # uname 00:24:58.154 10:21:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:58.154 10:21:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3519045 00:24:58.154 10:21:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:58.154 10:21:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:58.154 10:21:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3519045' 00:24:58.154 killing process with pid 3519045 00:24:58.154 10:21:31 -- common/autotest_common.sh@945 -- # kill 3519045 00:24:58.154 10:21:31 -- common/autotest_common.sh@950 -- # wait 3519045 00:24:58.722 10:21:31 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:58.722 10:21:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:58.722 10:21:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:58.722 10:21:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.722 10:21:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:58.722 10:21:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.722 10:21:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.722 10:21:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.676 10:21:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:00.676 00:25:00.676 real 1m13.601s 00:25:00.676 user 4m37.329s 00:25:00.676 sys 0m21.936s 00:25:00.676 10:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.676 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.676 ************************************ 00:25:00.676 END TEST nvmf_multiconnection 00:25:00.676 ************************************ 00:25:00.676 10:21:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:00.676 10:21:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:00.676 10:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:00.676 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.676 ************************************ 00:25:00.676 START TEST nvmf_initiator_timeout 00:25:00.676 ************************************ 00:25:00.676 10:21:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:00.676 * Looking for test storage... 
00:25:00.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.676 10:21:33 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.676 10:21:33 -- nvmf/common.sh@7 -- # uname -s 00:25:00.676 10:21:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.676 10:21:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.676 10:21:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.676 10:21:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.676 10:21:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.676 10:21:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.676 10:21:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.676 10:21:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.676 10:21:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.676 10:21:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.676 10:21:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:00.676 10:21:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:00.677 10:21:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.677 10:21:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.677 10:21:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.677 10:21:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.677 10:21:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.677 10:21:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.677 10:21:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.677 10:21:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.677 10:21:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.677 10:21:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.677 10:21:33 -- paths/export.sh@5 -- # export PATH 00:25:00.677 10:21:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.677 10:21:33 -- nvmf/common.sh@46 -- # : 0 00:25:00.677 10:21:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:00.677 10:21:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:00.677 10:21:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:00.677 10:21:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.677 10:21:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.677 10:21:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:00.677 10:21:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:00.677 10:21:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:00.677 10:21:33 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.677 10:21:33 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.677 10:21:33 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:00.677 10:21:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:00.677 10:21:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.677 10:21:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:00.677 10:21:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:00.677 10:21:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:00.677 10:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.677 10:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.677 10:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.677 10:21:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:00.677 10:21:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:00.677 10:21:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:00.677 10:21:33 -- common/autotest_common.sh@10 -- # set +x 00:25:07.244 10:21:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:07.244 10:21:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:07.244 10:21:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:07.244 10:21:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:07.244 10:21:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:07.244 10:21:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:07.244 10:21:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:07.244 10:21:39 -- nvmf/common.sh@294 -- # net_devs=() 00:25:07.244 10:21:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:07.244 
10:21:39 -- nvmf/common.sh@295 -- # e810=() 00:25:07.244 10:21:39 -- nvmf/common.sh@295 -- # local -ga e810 00:25:07.244 10:21:39 -- nvmf/common.sh@296 -- # x722=() 00:25:07.244 10:21:39 -- nvmf/common.sh@296 -- # local -ga x722 00:25:07.244 10:21:39 -- nvmf/common.sh@297 -- # mlx=() 00:25:07.244 10:21:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:07.244 10:21:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.244 10:21:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.244 10:21:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.245 10:21:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:07.245 10:21:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:07.245 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:07.245 10:21:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:07.245 10:21:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:07.245 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:07.245 10:21:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:07.245 10:21:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.245 10:21:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.245 10:21:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:25:07.245 Found net devices under 0000:af:00.0: cvl_0_0 00:25:07.245 10:21:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:07.245 10:21:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.245 10:21:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.245 10:21:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:07.245 Found net devices under 0000:af:00.1: cvl_0_1 00:25:07.245 10:21:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:07.245 10:21:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:07.245 10:21:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.245 10:21:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.245 10:21:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:07.245 10:21:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.245 10:21:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.245 10:21:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:07.245 10:21:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.245 10:21:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.245 10:21:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:07.245 10:21:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:07.245 10:21:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.245 10:21:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.245 10:21:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.245 10:21:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.245 10:21:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:07.245 10:21:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.245 10:21:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.245 10:21:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.245 10:21:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:07.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:07.245 00:25:07.245 --- 10.0.0.2 ping statistics --- 00:25:07.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.245 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:07.245 10:21:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:07.245 00:25:07.245 --- 10.0.0.1 ping statistics --- 00:25:07.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.245 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:07.245 10:21:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.245 10:21:39 -- nvmf/common.sh@410 -- # return 0 00:25:07.245 10:21:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:07.245 10:21:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.245 10:21:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:07.245 10:21:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.245 10:21:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:07.245 10:21:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:07.245 10:21:39 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:07.245 10:21:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:07.245 10:21:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.245 10:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:07.245 10:21:39 -- nvmf/common.sh@469 -- # nvmfpid=3534595 00:25:07.245 10:21:39 -- nvmf/common.sh@470 -- # waitforlisten 3534595 00:25:07.245 10:21:39 -- common/autotest_common.sh@819 -- # '[' -z 3534595 ']' 00:25:07.245 10:21:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.245 10:21:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.245 10:21:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.245 10:21:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.245 10:21:39 -- common/autotest_common.sh@10 -- # set +x 00:25:07.245 10:21:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.245 [2024-04-17 10:21:39.651977] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:07.245 [2024-04-17 10:21:39.652030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.245 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.245 [2024-04-17 10:21:39.737576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.245 [2024-04-17 10:21:39.826222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:07.245 [2024-04-17 10:21:39.826365] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.245 [2024-04-17 10:21:39.826376] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.245 [2024-04-17 10:21:39.826386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
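The nvmf_tcp_init steps traced above split the two NIC ports: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, and nvmf_tgt is then launched inside that namespace. Condensed into plain commands, the sequence this run logged is roughly the sketch below; interface names, addresses, port 4420 and the core mask are taken from the log above, the nvmf_tgt path is shortened, and the trailing '&' stands in for the harness's background-start/waitforlisten handling.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                  # start from unconfigured ports
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace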
00:25:07.245 [2024-04-17 10:21:39.826440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.245 [2024-04-17 10:21:39.826540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.245 [2024-04-17 10:21:39.826666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.245 [2024-04-17 10:21:39.826666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.245 10:21:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.245 10:21:40 -- common/autotest_common.sh@852 -- # return 0 00:25:07.245 10:21:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:07.245 10:21:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.245 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.245 10:21:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.245 10:21:40 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:07.245 10:21:40 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.245 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.245 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.245 Malloc0 00:25:07.245 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.245 10:21:40 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:07.245 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.245 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.504 Delay0 00:25:07.504 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.504 10:21:40 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.504 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.504 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.504 [2024-04-17 10:21:40.587660] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.504 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.504 10:21:40 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:07.504 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.504 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.504 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.504 10:21:40 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:07.504 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.504 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.504 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.504 10:21:40 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.504 10:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.504 10:21:40 -- common/autotest_common.sh@10 -- # set +x 00:25:07.504 [2024-04-17 10:21:40.615917] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.504 10:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.504 10:21:40 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:08.879 10:21:41 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:08.879 10:21:41 -- common/autotest_common.sh@1177 -- # local i=0 00:25:08.879 10:21:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.879 10:21:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:25:08.879 10:21:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:25:10.779 10:21:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:25:10.779 10:21:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:25:10.779 10:21:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:25:10.779 10:21:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:25:10.780 10:21:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.780 10:21:43 -- common/autotest_common.sh@1187 -- # return 0 00:25:10.780 10:21:43 -- target/initiator_timeout.sh@35 -- # fio_pid=3535385 00:25:10.780 10:21:43 -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:10.780 10:21:43 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:10.780 [global] 00:25:10.780 thread=1 00:25:10.780 invalidate=1 00:25:10.780 rw=write 00:25:10.780 time_based=1 00:25:10.780 runtime=60 00:25:10.780 ioengine=libaio 00:25:10.780 direct=1 00:25:10.780 bs=4096 00:25:10.780 iodepth=1 00:25:10.780 norandommap=0 00:25:10.780 numjobs=1 00:25:10.780 00:25:10.780 verify_dump=1 00:25:10.780 verify_backlog=512 00:25:10.780 verify_state_save=0 00:25:10.780 do_verify=1 00:25:10.780 verify=crc32c-intel 00:25:10.780 [job0] 00:25:10.780 filename=/dev/nvme0n1 00:25:10.780 Could not set queue depth (nvme0n1) 00:25:11.038 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:11.038 fio-3.35 00:25:11.038 Starting 1 thread 00:25:14.322 10:21:46 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:14.322 10:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.322 10:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 true 00:25:14.322 10:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.322 10:21:46 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:14.322 10:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.322 10:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 true 00:25:14.322 10:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.322 10:21:46 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:14.322 10:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.322 10:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 true 00:25:14.322 10:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.322 10:21:46 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:14.322 10:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.322 10:21:46 -- common/autotest_common.sh@10 -- # set +x 00:25:14.322 true 00:25:14.322 10:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
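The four bdev_delay_update_latency calls just above are the crux of this test: while fio runs against /dev/nvme0n1, every latency target of the Delay0 bdev (created earlier with 30 µs for all of -r/-t/-w/-n) is raised into the tens-of-seconds range, and a few lines further on the same four targets are dropped back to 30 so the queued I/O can complete. rpc_cmd here is the harness's wrapper around SPDK's JSON-RPC client; issued by hand the equivalent calls would look roughly like the sketch below (path shortened to scripts/rpc.py inside the SPDK tree, values in microseconds and mirroring this run, including the larger value it happened to pass for p99_write).

# raise the delay bdev's latency targets (microseconds) while fio is in flight
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000   # ~31 s
./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000  # value as issued in this run
# ...after the sleeps, restore the 30 us the Delay0 bdev was created with
for lat in avg_read avg_write p99_read p99_write; do
    ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
done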
00:25:14.322 10:21:46 -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:16.854 10:21:49 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:16.854 10:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.854 10:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.854 true 00:25:16.854 10:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.854 10:21:49 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:16.854 10:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.854 10:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.854 true 00:25:16.854 10:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.854 10:21:49 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:16.854 10:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.854 10:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.854 true 00:25:16.854 10:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.854 10:21:49 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:16.854 10:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.854 10:21:49 -- common/autotest_common.sh@10 -- # set +x 00:25:16.854 true 00:25:16.854 10:21:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.854 10:21:50 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:16.854 10:21:50 -- target/initiator_timeout.sh@54 -- # wait 3535385 00:26:13.088 00:26:13.088 job0: (groupid=0, jobs=1): err= 0: pid=3535560: Wed Apr 17 10:22:44 2024 00:26:13.088 read: IOPS=10, BW=43.6KiB/s (44.7kB/s)(2620KiB/60029msec) 00:26:13.088 slat (nsec): min=8250, max=97431, avg=18443.76, stdev=7030.66 00:26:13.088 clat (usec): min=324, max=41809k, avg=91192.78, stdev=1632654.17 00:26:13.088 lat (usec): min=333, max=41809k, avg=91211.22, stdev=1632654.36 00:26:13.088 clat percentiles (usec): 00:26:13.088 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 355], 00:26:13.088 | 20.00th=[ 375], 30.00th=[ 474], 40.00th=[ 40633], 00:26:13.088 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:13.088 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:13.088 | 99.00th=[ 41681], 99.50th=[ 43254], 99.90th=[17112761], 00:26:13.088 | 99.95th=[17112761], 99.99th=[17112761] 00:26:13.088 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60029msec); 0 zone resets 00:26:13.088 slat (nsec): min=9985, max=46389, avg=11862.66, stdev=2376.88 00:26:13.088 clat (usec): min=216, max=508, avg=258.77, stdev=25.18 00:26:13.088 lat (usec): min=228, max=549, avg=270.63, stdev=25.68 00:26:13.088 clat percentiles (usec): 00:26:13.088 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:26:13.088 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:26:13.088 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:26:13.088 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 441], 99.95th=[ 510], 00:26:13.088 | 99.99th=[ 510] 00:26:13.088 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:26:13.088 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:13.088 lat (usec) : 250=27.04%, 500=46.10%, 750=0.83% 00:26:13.088 lat (msec) : 2=0.06%, 50=25.91%, >=2000=0.06% 00:26:13.088 cpu : usr=0.03%, sys=0.06%, ctx=1680, majf=0, minf=2 00:26:13.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:26:13.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.088 issued rwts: total=655,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:13.088 00:26:13.088 Run status group 0 (all jobs): 00:26:13.088 READ: bw=43.6KiB/s (44.7kB/s), 43.6KiB/s-43.6KiB/s (44.7kB/s-44.7kB/s), io=2620KiB (2683kB), run=60029-60029msec 00:26:13.088 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60029-60029msec 00:26:13.088 00:26:13.088 Disk stats (read/write): 00:26:13.088 nvme0n1: ios=750/1024, merge=0/0, ticks=18813/256, in_queue=19069, util=99.61% 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:13.088 10:22:44 -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.088 10:22:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:26:13.088 10:22:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.088 10:22:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:13.088 10:22:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.088 10:22:44 -- common/autotest_common.sh@1210 -- # return 0 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:13.088 nvmf hotplug test: fio successful as expected 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.088 10:22:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.088 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:26:13.088 10:22:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:13.088 10:22:44 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:13.088 10:22:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:13.088 10:22:44 -- nvmf/common.sh@116 -- # sync 00:26:13.088 10:22:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.088 10:22:44 -- nvmf/common.sh@119 -- # set +e 00:26:13.088 10:22:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:13.088 10:22:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.088 rmmod nvme_tcp 00:26:13.088 rmmod nvme_fabrics 00:26:13.088 rmmod nvme_keyring 00:26:13.088 10:22:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.088 10:22:44 -- nvmf/common.sh@123 -- # set -e 00:26:13.088 10:22:44 -- nvmf/common.sh@124 -- # return 0 00:26:13.088 10:22:44 -- nvmf/common.sh@477 -- # '[' -n 3534595 ']' 00:26:13.088 10:22:44 -- nvmf/common.sh@478 -- # killprocess 3534595 00:26:13.088 10:22:44 -- common/autotest_common.sh@926 -- # '[' -z 3534595 ']' 00:26:13.088 10:22:44 -- common/autotest_common.sh@930 -- # kill -0 3534595 00:26:13.088 10:22:44 -- common/autotest_common.sh@931 -- # uname 00:26:13.088 10:22:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:13.088 10:22:44 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3534595 00:26:13.088 10:22:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:13.088 10:22:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:13.088 10:22:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3534595' 00:26:13.088 killing process with pid 3534595 00:26:13.088 10:22:44 -- common/autotest_common.sh@945 -- # kill 3534595 00:26:13.088 10:22:44 -- common/autotest_common.sh@950 -- # wait 3534595 00:26:13.088 10:22:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.088 10:22:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.088 10:22:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.088 10:22:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.088 10:22:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.088 10:22:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.088 10:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.088 10:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.025 10:22:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:14.025 00:26:14.025 real 1m13.182s 00:26:14.025 user 4m30.963s 00:26:14.025 sys 0m5.974s 00:26:14.025 10:22:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.025 10:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:14.025 ************************************ 00:26:14.025 END TEST nvmf_initiator_timeout 00:26:14.025 ************************************ 00:26:14.025 10:22:47 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:26:14.025 10:22:47 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:26:14.025 10:22:47 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:26:14.025 10:22:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:14.026 10:22:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 10:22:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:19.299 10:22:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:19.299 10:22:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:19.299 10:22:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:19.299 10:22:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:19.299 10:22:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:19.299 10:22:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:19.299 10:22:52 -- nvmf/common.sh@294 -- # net_devs=() 00:26:19.299 10:22:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:19.299 10:22:52 -- nvmf/common.sh@295 -- # e810=() 00:26:19.299 10:22:52 -- nvmf/common.sh@295 -- # local -ga e810 00:26:19.299 10:22:52 -- nvmf/common.sh@296 -- # x722=() 00:26:19.299 10:22:52 -- nvmf/common.sh@296 -- # local -ga x722 00:26:19.299 10:22:52 -- nvmf/common.sh@297 -- # mlx=() 00:26:19.299 10:22:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:19.299 10:22:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.299 10:22:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:19.299 10:22:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:19.299 10:22:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:19.299 10:22:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:19.299 10:22:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:19.299 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:19.299 10:22:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:19.299 10:22:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:19.299 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:19.299 10:22:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:19.299 10:22:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:19.299 10:22:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.299 10:22:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:19.299 10:22:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.299 10:22:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:19.299 Found net devices under 0000:af:00.0: cvl_0_0 00:26:19.299 10:22:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.299 10:22:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:19.299 10:22:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.299 10:22:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:19.299 10:22:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.299 10:22:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:19.299 Found net devices under 0000:af:00.1: cvl_0_1 00:26:19.299 10:22:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.299 10:22:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:19.299 10:22:52 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.299 10:22:52 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:26:19.299 10:22:52 
-- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:19.299 10:22:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:19.299 10:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.299 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:19.299 ************************************ 00:26:19.299 START TEST nvmf_perf_adq 00:26:19.299 ************************************ 00:26:19.299 10:22:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:19.299 * Looking for test storage... 00:26:19.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.299 10:22:52 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.299 10:22:52 -- nvmf/common.sh@7 -- # uname -s 00:26:19.299 10:22:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.299 10:22:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.299 10:22:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.299 10:22:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.299 10:22:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.299 10:22:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.299 10:22:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.299 10:22:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.299 10:22:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.299 10:22:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.299 10:22:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:19.299 10:22:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:19.299 10:22:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.299 10:22:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.299 10:22:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.299 10:22:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.299 10:22:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.299 10:22:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.299 10:22:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.299 10:22:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.299 10:22:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.299 10:22:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.299 10:22:52 -- paths/export.sh@5 -- # export PATH 00:26:19.299 10:22:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.299 10:22:52 -- nvmf/common.sh@46 -- # : 0 00:26:19.299 10:22:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.299 10:22:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.299 10:22:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.299 10:22:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.299 10:22:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.299 10:22:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.299 10:22:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.299 10:22:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.299 10:22:52 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:19.299 10:22:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:19.299 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:24.572 10:22:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:24.572 10:22:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:24.572 10:22:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:24.572 10:22:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:24.572 10:22:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:24.572 10:22:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:24.572 10:22:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:24.572 10:22:57 -- nvmf/common.sh@294 -- # net_devs=() 00:26:24.572 10:22:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:24.572 10:22:57 -- nvmf/common.sh@295 -- # e810=() 00:26:24.572 10:22:57 -- nvmf/common.sh@295 -- # local -ga e810 00:26:24.572 10:22:57 -- nvmf/common.sh@296 -- # x722=() 00:26:24.572 10:22:57 -- nvmf/common.sh@296 -- # local -ga x722 00:26:24.572 10:22:57 -- nvmf/common.sh@297 -- # mlx=() 00:26:24.572 10:22:57 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:26:24.572 10:22:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.572 10:22:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:24.572 10:22:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:24.572 10:22:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:24.572 10:22:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.572 10:22:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:24.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:24.572 10:22:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:24.572 10:22:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:24.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:24.572 10:22:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:24.572 10:22:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:24.572 10:22:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.572 10:22:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.572 10:22:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.572 10:22:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.572 10:22:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:24.572 Found net devices under 0000:af:00.0: cvl_0_0 00:26:24.572 10:22:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.572 10:22:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:24.572 10:22:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:24.572 10:22:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:24.572 10:22:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.572 10:22:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:24.572 Found net devices under 0000:af:00.1: cvl_0_1 00:26:24.572 10:22:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.573 10:22:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:24.573 10:22:57 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.573 10:22:57 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:24.573 10:22:57 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:24.573 10:22:57 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:26:24.573 10:22:57 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:25.953 10:22:59 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:27.858 10:23:01 -- target/perf_adq.sh@54 -- # sleep 5 00:26:33.130 10:23:06 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:33.130 10:23:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:33.130 10:23:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.130 10:23:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:33.130 10:23:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:33.130 10:23:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:33.130 10:23:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.130 10:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.130 10:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.130 10:23:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:33.130 10:23:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:33.130 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.130 10:23:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:33.130 10:23:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:33.130 10:23:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:33.130 10:23:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:33.130 10:23:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:33.130 10:23:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:33.130 10:23:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:33.130 10:23:06 -- nvmf/common.sh@294 -- # net_devs=() 00:26:33.130 10:23:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:33.130 10:23:06 -- nvmf/common.sh@295 -- # e810=() 00:26:33.130 10:23:06 -- nvmf/common.sh@295 -- # local -ga e810 00:26:33.130 10:23:06 -- nvmf/common.sh@296 -- # x722=() 00:26:33.130 10:23:06 -- nvmf/common.sh@296 -- # local -ga x722 00:26:33.130 10:23:06 -- nvmf/common.sh@297 -- # mlx=() 00:26:33.130 10:23:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:33.130 10:23:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.130 10:23:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:33.130 10:23:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:33.130 10:23:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:33.130 10:23:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:33.130 10:23:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:33.130 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:33.130 10:23:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:33.130 10:23:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:33.130 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:33.130 10:23:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:33.130 10:23:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:33.130 10:23:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:33.130 10:23:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.130 10:23:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:33.130 10:23:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.130 10:23:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:33.130 Found net devices under 0000:af:00.0: cvl_0_0 00:26:33.130 10:23:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.130 10:23:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:33.130 10:23:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.130 10:23:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:33.130 10:23:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.130 10:23:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:33.130 Found net devices under 0000:af:00.1: cvl_0_1 00:26:33.130 10:23:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.130 10:23:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:33.130 10:23:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:33.131 10:23:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:33.131 10:23:06 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:33.131 10:23:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:33.131 10:23:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.131 10:23:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.131 10:23:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.131 10:23:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:33.131 10:23:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.131 10:23:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.131 10:23:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:33.131 10:23:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.131 10:23:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.131 10:23:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:33.131 10:23:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:33.131 10:23:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.131 10:23:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.131 10:23:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.131 10:23:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.131 10:23:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:33.131 10:23:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.131 10:23:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.131 10:23:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.131 10:23:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:33.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:26:33.131 00:26:33.131 --- 10.0.0.2 ping statistics --- 00:26:33.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.131 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:33.131 10:23:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:26:33.131 00:26:33.131 --- 10.0.0.1 ping statistics --- 00:26:33.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.131 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:26:33.131 10:23:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.131 10:23:06 -- nvmf/common.sh@410 -- # return 0 00:26:33.131 10:23:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:33.131 10:23:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.131 10:23:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:33.131 10:23:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:33.131 10:23:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.131 10:23:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:33.131 10:23:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:33.131 10:23:06 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:33.131 10:23:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:33.131 10:23:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:33.131 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.131 10:23:06 -- nvmf/common.sh@469 -- # nvmfpid=3554450 00:26:33.131 10:23:06 -- nvmf/common.sh@470 -- # waitforlisten 3554450 00:26:33.131 10:23:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:33.131 10:23:06 -- common/autotest_common.sh@819 -- # '[' -z 3554450 ']' 00:26:33.131 10:23:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.131 10:23:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:33.131 10:23:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.131 10:23:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:33.131 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.131 [2024-04-17 10:23:06.415951] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:33.131 [2024-04-17 10:23:06.416002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.131 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.391 [2024-04-17 10:23:06.500261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.391 [2024-04-17 10:23:06.587988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:33.391 [2024-04-17 10:23:06.588130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.391 [2024-04-17 10:23:06.588141] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.391 [2024-04-17 10:23:06.588150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:33.391 [2024-04-17 10:23:06.588196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.391 [2024-04-17 10:23:06.588286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.391 [2024-04-17 10:23:06.588404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.391 [2024-04-17 10:23:06.588404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.328 10:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.328 10:23:07 -- common/autotest_common.sh@852 -- # return 0 00:26:34.328 10:23:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:34.328 10:23:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 10:23:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.328 10:23:07 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:26:34.328 10:23:07 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 [2024-04-17 10:23:07.502423] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 Malloc1 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 10:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.328 10:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.328 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:26:34.328 [2024-04-17 10:23:07.558311] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.328 10:23:07 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.328 10:23:07 -- target/perf_adq.sh@73 -- # perfpid=3554732 00:26:34.328 10:23:07 -- target/perf_adq.sh@74 -- # sleep 2 00:26:34.328 10:23:07 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:34.328 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.875 10:23:09 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:26:36.875 10:23:09 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:36.875 10:23:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.875 10:23:09 -- target/perf_adq.sh@76 -- # wc -l 00:26:36.875 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:26:36.875 10:23:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.875 10:23:09 -- target/perf_adq.sh@76 -- # count=4 00:26:36.875 10:23:09 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:26:36.875 10:23:09 -- target/perf_adq.sh@81 -- # wait 3554732 00:26:45.021 Initializing NVMe Controllers 00:26:45.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:45.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:45.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:45.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:45.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:45.021 Initialization complete. Launching workers. 00:26:45.021 ======================================================== 00:26:45.021 Latency(us) 00:26:45.021 Device Information : IOPS MiB/s Average min max 00:26:45.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8403.39 32.83 7618.29 1453.34 12390.67 00:26:45.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10297.99 40.23 6214.86 1316.28 10189.83 00:26:45.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8536.59 33.35 7496.62 1555.32 13222.23 00:26:45.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8330.20 32.54 7683.51 1512.20 11604.63 00:26:45.021 ======================================================== 00:26:45.021 Total : 35568.18 138.94 7198.03 1316.28 13222.23 00:26:45.021 00:26:45.021 10:23:17 -- target/perf_adq.sh@82 -- # nvmftestfini 00:26:45.021 10:23:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:45.021 10:23:17 -- nvmf/common.sh@116 -- # sync 00:26:45.021 10:23:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:45.021 10:23:17 -- nvmf/common.sh@119 -- # set +e 00:26:45.021 10:23:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:45.021 10:23:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:45.021 rmmod nvme_tcp 00:26:45.021 rmmod nvme_fabrics 00:26:45.021 rmmod nvme_keyring 00:26:45.021 10:23:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:45.021 10:23:17 -- nvmf/common.sh@123 -- # set -e 00:26:45.021 10:23:17 -- nvmf/common.sh@124 -- # return 0 00:26:45.021 10:23:17 -- nvmf/common.sh@477 -- # '[' -n 3554450 ']' 00:26:45.021 10:23:17 -- nvmf/common.sh@478 -- # killprocess 3554450 00:26:45.021 10:23:17 -- common/autotest_common.sh@926 -- # '[' -z 3554450 ']' 00:26:45.021 10:23:17 -- common/autotest_common.sh@930 -- 
# kill -0 3554450 00:26:45.021 10:23:17 -- common/autotest_common.sh@931 -- # uname 00:26:45.021 10:23:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:45.021 10:23:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3554450 00:26:45.021 10:23:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:45.021 10:23:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:45.021 10:23:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3554450' 00:26:45.021 killing process with pid 3554450 00:26:45.021 10:23:17 -- common/autotest_common.sh@945 -- # kill 3554450 00:26:45.021 10:23:17 -- common/autotest_common.sh@950 -- # wait 3554450 00:26:45.021 10:23:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:45.021 10:23:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:45.021 10:23:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:45.021 10:23:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.021 10:23:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:45.021 10:23:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.021 10:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.021 10:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.926 10:23:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:46.926 10:23:20 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:26:46.926 10:23:20 -- target/perf_adq.sh@52 -- # rmmod ice 00:26:48.303 10:23:21 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:50.252 10:23:23 -- target/perf_adq.sh@54 -- # sleep 5 00:26:55.524 10:23:28 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:55.524 10:23:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:55.524 10:23:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.524 10:23:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:55.524 10:23:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:55.524 10:23:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:55.524 10:23:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.524 10:23:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.524 10:23:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.524 10:23:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:55.524 10:23:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:55.524 10:23:28 -- common/autotest_common.sh@10 -- # set +x 00:26:55.524 10:23:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:55.524 10:23:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:55.524 10:23:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:55.524 10:23:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:55.524 10:23:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:55.524 10:23:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:55.524 10:23:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:55.524 10:23:28 -- nvmf/common.sh@294 -- # net_devs=() 00:26:55.524 10:23:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:55.524 10:23:28 -- nvmf/common.sh@295 -- # e810=() 00:26:55.524 10:23:28 -- nvmf/common.sh@295 -- # local -ga e810 00:26:55.524 10:23:28 -- nvmf/common.sh@296 -- # x722=() 00:26:55.524 10:23:28 -- nvmf/common.sh@296 -- # local -ga x722 00:26:55.524 10:23:28 -- nvmf/common.sh@297 -- # mlx=() 00:26:55.524 10:23:28 
-- nvmf/common.sh@297 -- # local -ga mlx 00:26:55.524 10:23:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.524 10:23:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:55.524 10:23:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:55.524 10:23:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.524 10:23:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:55.524 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:55.524 10:23:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:55.524 10:23:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:55.524 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:55.524 10:23:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.524 10:23:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.524 10:23:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.524 10:23:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:55.524 Found net devices under 0000:af:00.0: cvl_0_0 00:26:55.524 10:23:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.524 10:23:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:55.524 10:23:28 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.524 10:23:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.524 10:23:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:55.524 Found net devices under 0000:af:00.1: cvl_0_1 00:26:55.524 10:23:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.524 10:23:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:55.524 10:23:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:55.524 10:23:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:55.524 10:23:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.524 10:23:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.524 10:23:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.524 10:23:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:55.524 10:23:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.524 10:23:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.524 10:23:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:55.524 10:23:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.524 10:23:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.524 10:23:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:55.524 10:23:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:55.524 10:23:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.524 10:23:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.524 10:23:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.524 10:23:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.524 10:23:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:55.525 10:23:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.525 10:23:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.525 10:23:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.525 10:23:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:55.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:55.784 00:26:55.784 --- 10.0.0.2 ping statistics --- 00:26:55.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.784 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:55.784 10:23:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:26:55.784 00:26:55.784 --- 10.0.0.1 ping statistics --- 00:26:55.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.784 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:55.784 10:23:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.784 10:23:28 -- nvmf/common.sh@410 -- # return 0 00:26:55.784 10:23:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:55.784 10:23:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.784 10:23:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:55.784 10:23:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:55.784 10:23:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.784 10:23:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:55.784 10:23:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:55.784 10:23:28 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:55.784 10:23:28 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:55.784 10:23:28 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:55.784 10:23:28 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:55.784 net.core.busy_poll = 1 00:26:55.784 10:23:28 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:55.784 net.core.busy_read = 1 00:26:55.784 10:23:28 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:55.784 10:23:28 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:55.784 10:23:29 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:55.784 10:23:29 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:55.784 10:23:29 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:56.044 10:23:29 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:56.044 10:23:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:56.044 10:23:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:56.044 10:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.044 10:23:29 -- nvmf/common.sh@469 -- # nvmfpid=3558830 00:26:56.044 10:23:29 -- nvmf/common.sh@470 -- # waitforlisten 3558830 00:26:56.044 10:23:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:56.044 10:23:29 -- common/autotest_common.sh@819 -- # '[' -z 3558830 ']' 00:26:56.044 10:23:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.044 10:23:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:56.044 10:23:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:56.044 10:23:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:56.044 10:23:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.044 [2024-04-17 10:23:29.199743] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:56.044 [2024-04-17 10:23:29.199799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.044 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.044 [2024-04-17 10:23:29.283663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.044 [2024-04-17 10:23:29.372459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:56.044 [2024-04-17 10:23:29.372600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.044 [2024-04-17 10:23:29.372611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.044 [2024-04-17 10:23:29.372621] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.044 [2024-04-17 10:23:29.372669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.044 [2024-04-17 10:23:29.372760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.044 [2024-04-17 10:23:29.372864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.044 [2024-04-17 10:23:29.372864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.980 10:23:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:56.980 10:23:30 -- common/autotest_common.sh@852 -- # return 0 00:26:56.980 10:23:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:56.980 10:23:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:56.980 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.980 10:23:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.980 10:23:30 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:56.980 10:23:30 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:56.980 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.980 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.980 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.980 10:23:30 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:56.980 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.980 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.980 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.980 10:23:30 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:56.980 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.980 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:56.980 [2024-04-17 10:23:30.281018] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.980 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.980 10:23:30 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:56.980 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.980 10:23:30 -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.980 Malloc1 00:26:56.980 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.980 10:23:30 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:56.980 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.980 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.239 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.239 10:23:30 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:57.239 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.239 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.239 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.239 10:23:30 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.239 10:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:57.239 10:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:57.239 [2024-04-17 10:23:30.328825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.239 10:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:57.239 10:23:30 -- target/perf_adq.sh@94 -- # perfpid=3559113 00:26:57.239 10:23:30 -- target/perf_adq.sh@95 -- # sleep 2 00:26:57.239 10:23:30 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:57.239 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.145 10:23:32 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:59.145 10:23:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.145 10:23:32 -- target/perf_adq.sh@97 -- # wc -l 00:26:59.145 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:26:59.145 10:23:32 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:59.145 10:23:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.145 10:23:32 -- target/perf_adq.sh@97 -- # count=3 00:26:59.145 10:23:32 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:26:59.145 10:23:32 -- target/perf_adq.sh@103 -- # wait 3559113 00:27:07.264 Initializing NVMe Controllers 00:27:07.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.265 Initialization complete. Launching workers. 
00:27:07.265 ======================================================== 00:27:07.265 Latency(us) 00:27:07.265 Device Information : IOPS MiB/s Average min max 00:27:07.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7400.56 28.91 8648.13 1193.61 54304.55 00:27:07.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6429.38 25.11 9954.59 1333.37 55460.52 00:27:07.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7028.17 27.45 9115.36 1297.78 55720.49 00:27:07.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6756.78 26.39 9472.41 1478.03 54290.92 00:27:07.265 ======================================================== 00:27:07.265 Total : 27614.89 107.87 9272.90 1193.61 55720.49 00:27:07.265 00:27:07.265 10:23:40 -- target/perf_adq.sh@104 -- # nvmftestfini 00:27:07.265 10:23:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:07.265 10:23:40 -- nvmf/common.sh@116 -- # sync 00:27:07.265 10:23:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:07.265 10:23:40 -- nvmf/common.sh@119 -- # set +e 00:27:07.265 10:23:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:07.265 10:23:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:07.265 rmmod nvme_tcp 00:27:07.265 rmmod nvme_fabrics 00:27:07.265 rmmod nvme_keyring 00:27:07.265 10:23:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:07.265 10:23:40 -- nvmf/common.sh@123 -- # set -e 00:27:07.265 10:23:40 -- nvmf/common.sh@124 -- # return 0 00:27:07.265 10:23:40 -- nvmf/common.sh@477 -- # '[' -n 3558830 ']' 00:27:07.265 10:23:40 -- nvmf/common.sh@478 -- # killprocess 3558830 00:27:07.265 10:23:40 -- common/autotest_common.sh@926 -- # '[' -z 3558830 ']' 00:27:07.265 10:23:40 -- common/autotest_common.sh@930 -- # kill -0 3558830 00:27:07.265 10:23:40 -- common/autotest_common.sh@931 -- # uname 00:27:07.265 10:23:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:07.265 10:23:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3558830 00:27:07.524 10:23:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:07.524 10:23:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:07.524 10:23:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3558830' 00:27:07.524 killing process with pid 3558830 00:27:07.524 10:23:40 -- common/autotest_common.sh@945 -- # kill 3558830 00:27:07.524 10:23:40 -- common/autotest_common.sh@950 -- # wait 3558830 00:27:07.783 10:23:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:07.783 10:23:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:07.783 10:23:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:07.783 10:23:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.783 10:23:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:07.783 10:23:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.783 10:23:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.783 10:23:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.687 10:23:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:09.687 10:23:42 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:27:09.687 00:27:09.687 real 0m50.449s 00:27:09.687 user 2m49.348s 00:27:09.687 sys 0m10.190s 00:27:09.687 10:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.687 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.687 
************************************ 00:27:09.687 END TEST nvmf_perf_adq 00:27:09.687 ************************************ 00:27:09.687 10:23:42 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:09.687 10:23:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:09.687 10:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.687 10:23:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.687 ************************************ 00:27:09.687 START TEST nvmf_shutdown 00:27:09.687 ************************************ 00:27:09.687 10:23:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:09.946 * Looking for test storage... 00:27:09.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.946 10:23:43 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.946 10:23:43 -- nvmf/common.sh@7 -- # uname -s 00:27:09.946 10:23:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.946 10:23:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.946 10:23:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.946 10:23:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.946 10:23:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.946 10:23:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.946 10:23:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.946 10:23:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.946 10:23:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.946 10:23:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.946 10:23:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:09.946 10:23:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:09.946 10:23:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.946 10:23:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.946 10:23:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.946 10:23:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.946 10:23:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.946 10:23:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.946 10:23:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.946 10:23:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.946 10:23:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.946 10:23:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.946 10:23:43 -- paths/export.sh@5 -- # export PATH 00:27:09.946 10:23:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.946 10:23:43 -- nvmf/common.sh@46 -- # : 0 00:27:09.946 10:23:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:09.946 10:23:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:09.946 10:23:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:09.946 10:23:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.946 10:23:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.946 10:23:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:09.946 10:23:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:09.946 10:23:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:09.946 10:23:43 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:09.946 10:23:43 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:09.946 10:23:43 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:09.946 10:23:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:09.946 10:23:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.946 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:09.946 ************************************ 00:27:09.946 START TEST nvmf_shutdown_tc1 00:27:09.946 ************************************ 00:27:09.946 10:23:43 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:27:09.946 10:23:43 -- target/shutdown.sh@74 -- # starttarget 00:27:09.946 10:23:43 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:09.946 10:23:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:09.946 10:23:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.946 10:23:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:09.946 10:23:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:09.946 10:23:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:09.946 
10:23:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.946 10:23:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:09.946 10:23:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.946 10:23:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:09.946 10:23:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:09.946 10:23:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:09.946 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.218 10:23:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:15.218 10:23:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:15.218 10:23:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:15.218 10:23:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:15.218 10:23:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:15.218 10:23:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:15.218 10:23:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:15.218 10:23:48 -- nvmf/common.sh@294 -- # net_devs=() 00:27:15.218 10:23:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:15.218 10:23:48 -- nvmf/common.sh@295 -- # e810=() 00:27:15.218 10:23:48 -- nvmf/common.sh@295 -- # local -ga e810 00:27:15.218 10:23:48 -- nvmf/common.sh@296 -- # x722=() 00:27:15.218 10:23:48 -- nvmf/common.sh@296 -- # local -ga x722 00:27:15.218 10:23:48 -- nvmf/common.sh@297 -- # mlx=() 00:27:15.218 10:23:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:15.218 10:23:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.218 10:23:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.219 10:23:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:15.219 10:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:15.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:15.219 10:23:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:27:15.219 10:23:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:15.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:15.219 10:23:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.219 10:23:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.219 10:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.219 10:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:15.219 Found net devices under 0000:af:00.0: cvl_0_0 00:27:15.219 10:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.219 10:23:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.219 10:23:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.219 10:23:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:15.219 Found net devices under 0000:af:00.1: cvl_0_1 00:27:15.219 10:23:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:15.219 10:23:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:15.219 10:23:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.219 10:23:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.219 10:23:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:15.219 10:23:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.219 10:23:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.219 10:23:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:15.219 10:23:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.219 10:23:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.219 10:23:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:15.219 10:23:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:15.219 10:23:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.219 10:23:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.219 10:23:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.219 10:23:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.219 10:23:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:15.219 10:23:48 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.219 10:23:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.219 10:23:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.219 10:23:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:15.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:27:15.219 00:27:15.219 --- 10.0.0.2 ping statistics --- 00:27:15.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.219 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:15.219 10:23:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:15.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:15.219 00:27:15.219 --- 10.0.0.1 ping statistics --- 00:27:15.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.219 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:15.219 10:23:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.219 10:23:48 -- nvmf/common.sh@410 -- # return 0 00:27:15.219 10:23:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:15.219 10:23:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.219 10:23:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:15.219 10:23:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.219 10:23:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:15.219 10:23:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:15.478 10:23:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:15.478 10:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:15.478 10:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:15.478 10:23:48 -- common/autotest_common.sh@10 -- # set +x 00:27:15.478 10:23:48 -- nvmf/common.sh@469 -- # nvmfpid=3564553 00:27:15.478 10:23:48 -- nvmf/common.sh@470 -- # waitforlisten 3564553 00:27:15.478 10:23:48 -- common/autotest_common.sh@819 -- # '[' -z 3564553 ']' 00:27:15.478 10:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.478 10:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:15.478 10:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.478 10:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:15.478 10:23:48 -- common/autotest_common.sh@10 -- # set +x 00:27:15.478 10:23:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:15.478 [2024-04-17 10:23:48.618288] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:15.478 [2024-04-17 10:23:48.618342] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.478 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.478 [2024-04-17 10:23:48.696691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.478 [2024-04-17 10:23:48.783998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:15.478 [2024-04-17 10:23:48.784141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.478 [2024-04-17 10:23:48.784152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.478 [2024-04-17 10:23:48.784161] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.478 [2024-04-17 10:23:48.784265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.478 [2024-04-17 10:23:48.784383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.478 [2024-04-17 10:23:48.784494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.478 [2024-04-17 10:23:48.784494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.416 10:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:16.416 10:23:49 -- common/autotest_common.sh@852 -- # return 0 00:27:16.416 10:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:16.416 10:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:16.416 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.416 10:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.416 10:23:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.416 10:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.416 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.416 [2024-04-17 10:23:49.588440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.416 10:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.416 10:23:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:16.416 10:23:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:16.416 10:23:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:16.416 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.416 10:23:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- 
target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:16.416 10:23:49 -- target/shutdown.sh@28 -- # cat 00:27:16.416 10:23:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:16.416 10:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.416 10:23:49 -- common/autotest_common.sh@10 -- # set +x 00:27:16.416 Malloc1 00:27:16.416 [2024-04-17 10:23:49.688412] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.416 Malloc2 00:27:16.675 Malloc3 00:27:16.675 Malloc4 00:27:16.675 Malloc5 00:27:16.675 Malloc6 00:27:16.675 Malloc7 00:27:16.675 Malloc8 00:27:16.936 Malloc9 00:27:16.936 Malloc10 00:27:16.936 10:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.936 10:23:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:16.936 10:23:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:16.936 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:16.936 10:23:50 -- target/shutdown.sh@78 -- # perfpid=3564872 00:27:16.936 10:23:50 -- target/shutdown.sh@79 -- # waitforlisten 3564872 /var/tmp/bdevperf.sock 00:27:16.936 10:23:50 -- common/autotest_common.sh@819 -- # '[' -z 3564872 ']' 00:27:16.936 10:23:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.936 10:23:50 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:16.936 10:23:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:16.936 10:23:50 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:16.936 10:23:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
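The gen_nvmf_target_json trace that follows builds the controller map for the benchmark one subsystem at a time: each pass through the loop appends a bdev_nvme_attach_controller entry to a config array via a here-doc, the assembled entries are run through jq, and the comma-joined result is printed and handed to the app over a process-substitution file descriptor (/dev/fd/63 in the bdev_svc command above). A minimal sketch of that pattern, not the SPDK helper itself: the name gen_controller_entries is illustrative, and the address, port and digest values are simply the ones resolved in this run (10.0.0.2, port 4420, digests off).

gen_controller_entries() {    # illustrative name, not a function from the SPDK scripts
    local subsystem
    local entries=()
    for subsystem in "$@"; do
        # one attach-controller entry per subsystem, expanded from a here-doc
        entries+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,                      # join the per-subsystem entries with commas
    printf '%s\n' "${entries[*]}"
}
# Fed to the app over a process-substitution fd, as in shutdown.sh:
#   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_controller_entries {1..10})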
00:27:16.936 10:23:50 -- nvmf/common.sh@520 -- # config=() 00:27:16.936 10:23:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:16.936 10:23:50 -- nvmf/common.sh@520 -- # local subsystem config 00:27:16.936 10:23:50 -- common/autotest_common.sh@10 -- # set +x 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": 
"$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 [2024-04-17 10:23:50.165447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:16.936 [2024-04-17 10:23:50.165506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 
00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:16.936 { 00:27:16.936 "params": { 00:27:16.936 "name": "Nvme$subsystem", 00:27:16.936 "trtype": "$TEST_TRANSPORT", 00:27:16.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.936 "adrfam": "ipv4", 00:27:16.936 "trsvcid": "$NVMF_PORT", 00:27:16.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.936 "hdgst": ${hdgst:-false}, 00:27:16.936 "ddgst": ${ddgst:-false} 00:27:16.936 }, 00:27:16.936 "method": "bdev_nvme_attach_controller" 00:27:16.936 } 00:27:16.936 EOF 00:27:16.936 )") 00:27:16.936 10:23:50 -- nvmf/common.sh@542 -- # cat 00:27:16.936 10:23:50 -- nvmf/common.sh@544 -- # jq . 00:27:16.936 10:23:50 -- nvmf/common.sh@545 -- # IFS=, 00:27:16.936 10:23:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:16.936 "params": { 00:27:16.937 "name": "Nvme1", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme2", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme3", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme4", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme5", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme6", 00:27:16.937 "trtype": "tcp", 
00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme7", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme8", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme9", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 },{ 00:27:16.937 "params": { 00:27:16.937 "name": "Nvme10", 00:27:16.937 "trtype": "tcp", 00:27:16.937 "traddr": "10.0.0.2", 00:27:16.937 "adrfam": "ipv4", 00:27:16.937 "trsvcid": "4420", 00:27:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:16.937 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:16.937 "hdgst": false, 00:27:16.937 "ddgst": false 00:27:16.937 }, 00:27:16.937 "method": "bdev_nvme_attach_controller" 00:27:16.937 }' 00:27:16.937 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.937 [2024-04-17 10:23:50.247234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.196 [2024-04-17 10:23:50.331999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.573 10:23:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:18.573 10:23:51 -- common/autotest_common.sh@852 -- # return 0 00:27:18.573 10:23:51 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:18.573 10:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.573 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:27:18.573 10:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.573 10:23:51 -- target/shutdown.sh@83 -- # kill -9 3564872 00:27:18.573 10:23:51 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:18.573 10:23:51 -- target/shutdown.sh@87 -- # sleep 1 00:27:19.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3564872 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:19.949 10:23:52 -- target/shutdown.sh@88 -- # kill -0 3564553 00:27:19.949 10:23:52 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:19.949 10:23:52 
-- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:19.949 10:23:52 -- nvmf/common.sh@520 -- # config=() 00:27:19.949 10:23:52 -- nvmf/common.sh@520 -- # local subsystem config 00:27:19.949 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.949 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.949 { 00:27:19.949 "params": { 00:27:19.949 "name": "Nvme$subsystem", 00:27:19.949 "trtype": "$TEST_TRANSPORT", 00:27:19.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.949 "adrfam": "ipv4", 00:27:19.949 "trsvcid": "$NVMF_PORT", 00:27:19.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.949 "hdgst": ${hdgst:-false}, 00:27:19.949 "ddgst": ${ddgst:-false} 00:27:19.949 }, 00:27:19.949 "method": "bdev_nvme_attach_controller" 00:27:19.949 } 00:27:19.949 EOF 00:27:19.949 )") 00:27:19.949 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.949 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.949 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.949 { 00:27:19.949 "params": { 00:27:19.949 "name": "Nvme$subsystem", 00:27:19.949 "trtype": "$TEST_TRANSPORT", 00:27:19.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.949 "adrfam": "ipv4", 00:27:19.949 "trsvcid": "$NVMF_PORT", 00:27:19.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.949 "hdgst": ${hdgst:-false}, 00:27:19.949 "ddgst": ${ddgst:-false} 00:27:19.949 }, 00:27:19.949 "method": "bdev_nvme_attach_controller" 00:27:19.949 } 00:27:19.949 EOF 00:27:19.949 )") 00:27:19.949 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.949 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.949 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.949 { 00:27:19.949 "params": { 00:27:19.949 "name": "Nvme$subsystem", 00:27:19.949 "trtype": "$TEST_TRANSPORT", 00:27:19.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.949 "adrfam": "ipv4", 00:27:19.949 "trsvcid": "$NVMF_PORT", 00:27:19.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.949 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 [2024-04-17 10:23:52.906939] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:19.950 [2024-04-17 10:23:52.907002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565428 ] 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:19.950 { 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme$subsystem", 00:27:19.950 "trtype": "$TEST_TRANSPORT", 00:27:19.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "$NVMF_PORT", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.950 "hdgst": ${hdgst:-false}, 00:27:19.950 "ddgst": ${ddgst:-false} 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 } 00:27:19.950 EOF 00:27:19.950 )") 00:27:19.950 10:23:52 -- nvmf/common.sh@542 -- # cat 00:27:19.950 10:23:52 -- nvmf/common.sh@544 -- # jq . 
00:27:19.950 10:23:52 -- nvmf/common.sh@545 -- # IFS=, 00:27:19.950 10:23:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme1", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme2", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme3", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme4", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme5", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme6", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme7", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.950 "adrfam": "ipv4", 00:27:19.950 "trsvcid": "4420", 00:27:19.950 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:19.950 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:19.950 "hdgst": false, 00:27:19.950 "ddgst": false 00:27:19.950 }, 00:27:19.950 "method": "bdev_nvme_attach_controller" 00:27:19.950 },{ 00:27:19.950 "params": { 00:27:19.950 "name": "Nvme8", 00:27:19.950 "trtype": "tcp", 00:27:19.950 "traddr": "10.0.0.2", 00:27:19.951 "adrfam": "ipv4", 00:27:19.951 "trsvcid": "4420", 00:27:19.951 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:19.951 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:19.951 "hdgst": false, 00:27:19.951 "ddgst": false 00:27:19.951 }, 00:27:19.951 "method": 
"bdev_nvme_attach_controller" 00:27:19.951 },{ 00:27:19.951 "params": { 00:27:19.951 "name": "Nvme9", 00:27:19.951 "trtype": "tcp", 00:27:19.951 "traddr": "10.0.0.2", 00:27:19.951 "adrfam": "ipv4", 00:27:19.951 "trsvcid": "4420", 00:27:19.951 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:19.951 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:19.951 "hdgst": false, 00:27:19.951 "ddgst": false 00:27:19.951 }, 00:27:19.951 "method": "bdev_nvme_attach_controller" 00:27:19.951 },{ 00:27:19.951 "params": { 00:27:19.951 "name": "Nvme10", 00:27:19.951 "trtype": "tcp", 00:27:19.951 "traddr": "10.0.0.2", 00:27:19.951 "adrfam": "ipv4", 00:27:19.951 "trsvcid": "4420", 00:27:19.951 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:19.951 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:19.951 "hdgst": false, 00:27:19.951 "ddgst": false 00:27:19.951 }, 00:27:19.951 "method": "bdev_nvme_attach_controller" 00:27:19.951 }' 00:27:19.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.951 [2024-04-17 10:23:52.991130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.951 [2024-04-17 10:23:53.075411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.327 Running I/O for 1 seconds... 00:27:22.267 00:27:22.267 Latency(us) 00:27:22.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.267 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme1n1 : 1.14 350.60 21.91 0.00 0.00 178901.01 28359.21 146800.64 00:27:22.267 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme2n1 : 1.13 315.42 19.71 0.00 0.00 196752.16 26214.40 163959.16 00:27:22.267 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme3n1 : 1.14 350.09 21.88 0.00 0.00 175720.91 31695.59 140127.88 00:27:22.267 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme4n1 : 1.14 348.61 21.79 0.00 0.00 174894.60 32172.22 138221.38 00:27:22.267 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme5n1 : 1.14 349.47 21.84 0.00 0.00 173243.77 27048.49 137268.13 00:27:22.267 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme6n1 : 1.12 316.56 19.78 0.00 0.00 189099.64 25261.15 154426.65 00:27:22.267 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme7n1 : 1.15 347.54 21.72 0.00 0.00 170888.19 30504.03 139174.63 00:27:22.267 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme8n1 : 1.15 346.87 21.68 0.00 0.00 169714.59 30027.40 142987.64 00:27:22.267 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 Nvme9n1 : 1.15 346.32 21.65 0.00 0.00 168469.43 29431.62 145847.39 00:27:22.267 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:22.267 Verification LBA range: start 0x0 length 0x400 00:27:22.267 
Nvme10n1 : 1.13 316.70 19.79 0.00 0.00 182851.91 12153.95 144894.14 00:27:22.267 =================================================================================================================== 00:27:22.267 Total : 3388.19 211.76 0.00 0.00 177673.30 12153.95 163959.16 00:27:22.528 10:23:55 -- target/shutdown.sh@93 -- # stoptarget 00:27:22.528 10:23:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:22.528 10:23:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:22.528 10:23:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.528 10:23:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:22.528 10:23:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:22.528 10:23:55 -- nvmf/common.sh@116 -- # sync 00:27:22.528 10:23:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:22.528 10:23:55 -- nvmf/common.sh@119 -- # set +e 00:27:22.528 10:23:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:22.528 10:23:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:22.528 rmmod nvme_tcp 00:27:22.528 rmmod nvme_fabrics 00:27:22.528 rmmod nvme_keyring 00:27:22.528 10:23:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:22.528 10:23:55 -- nvmf/common.sh@123 -- # set -e 00:27:22.528 10:23:55 -- nvmf/common.sh@124 -- # return 0 00:27:22.528 10:23:55 -- nvmf/common.sh@477 -- # '[' -n 3564553 ']' 00:27:22.528 10:23:55 -- nvmf/common.sh@478 -- # killprocess 3564553 00:27:22.528 10:23:55 -- common/autotest_common.sh@926 -- # '[' -z 3564553 ']' 00:27:22.528 10:23:55 -- common/autotest_common.sh@930 -- # kill -0 3564553 00:27:22.528 10:23:55 -- common/autotest_common.sh@931 -- # uname 00:27:22.528 10:23:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:22.528 10:23:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3564553 00:27:22.528 10:23:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:22.528 10:23:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:22.528 10:23:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3564553' 00:27:22.528 killing process with pid 3564553 00:27:22.528 10:23:55 -- common/autotest_common.sh@945 -- # kill 3564553 00:27:22.528 10:23:55 -- common/autotest_common.sh@950 -- # wait 3564553 00:27:23.096 10:23:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:23.096 10:23:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:23.096 10:23:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:23.096 10:23:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.096 10:23:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:23.096 10:23:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.096 10:23:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.096 10:23:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.000 10:23:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:25.000 00:27:25.000 real 0m15.219s 00:27:25.000 user 0m35.448s 00:27:25.000 sys 0m5.583s 00:27:25.000 10:23:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.000 10:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.000 ************************************ 00:27:25.000 END TEST nvmf_shutdown_tc1 00:27:25.000 ************************************ 00:27:25.259 10:23:58 -- target/shutdown.sh@147 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:25.259 10:23:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:25.259 10:23:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:25.259 10:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.259 ************************************ 00:27:25.259 START TEST nvmf_shutdown_tc2 00:27:25.259 ************************************ 00:27:25.259 10:23:58 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:27:25.259 10:23:58 -- target/shutdown.sh@98 -- # starttarget 00:27:25.259 10:23:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:25.259 10:23:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:25.259 10:23:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.259 10:23:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:25.259 10:23:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:25.259 10:23:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:25.259 10:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.259 10:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.259 10:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.259 10:23:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:25.259 10:23:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:25.259 10:23:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:25.259 10:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.260 10:23:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:25.260 10:23:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:25.260 10:23:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:25.260 10:23:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:25.260 10:23:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:25.260 10:23:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:25.260 10:23:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:25.260 10:23:58 -- nvmf/common.sh@294 -- # net_devs=() 00:27:25.260 10:23:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:25.260 10:23:58 -- nvmf/common.sh@295 -- # e810=() 00:27:25.260 10:23:58 -- nvmf/common.sh@295 -- # local -ga e810 00:27:25.260 10:23:58 -- nvmf/common.sh@296 -- # x722=() 00:27:25.260 10:23:58 -- nvmf/common.sh@296 -- # local -ga x722 00:27:25.260 10:23:58 -- nvmf/common.sh@297 -- # mlx=() 00:27:25.260 10:23:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:25.260 10:23:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.260 10:23:58 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:27:25.260 10:23:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:25.260 10:23:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.260 10:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.260 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.260 10:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:25.260 10:23:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.260 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.260 10:23:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.260 10:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.260 10:23:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.260 10:23:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.260 Found net devices under 0000:af:00.0: cvl_0_0 00:27:25.260 10:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.260 10:23:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:25.260 10:23:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.260 10:23:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.260 10:23:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.260 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.260 10:23:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.260 10:23:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:25.260 10:23:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:25.260 10:23:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:25.260 10:23:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.260 10:23:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.260 10:23:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.260 10:23:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:25.260 10:23:58 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.260 10:23:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.260 10:23:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:25.260 10:23:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.260 10:23:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.260 10:23:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:25.260 10:23:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:25.260 10:23:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.260 10:23:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.260 10:23:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.260 10:23:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.260 10:23:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:25.260 10:23:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.520 10:23:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.520 10:23:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.520 10:23:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:25.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:27:25.520 00:27:25.520 --- 10.0.0.2 ping statistics --- 00:27:25.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.520 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:25.520 10:23:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:27:25.520 00:27:25.520 --- 10.0.0.1 ping statistics --- 00:27:25.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.520 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:25.520 10:23:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.520 10:23:58 -- nvmf/common.sh@410 -- # return 0 00:27:25.520 10:23:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:25.520 10:23:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.520 10:23:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:25.520 10:23:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:25.520 10:23:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.520 10:23:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:25.520 10:23:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:25.520 10:23:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:25.520 10:23:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:25.520 10:23:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:25.520 10:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 10:23:58 -- nvmf/common.sh@469 -- # nvmfpid=3566583 00:27:25.520 10:23:58 -- nvmf/common.sh@470 -- # waitforlisten 3566583 00:27:25.520 10:23:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:25.520 10:23:58 -- common/autotest_common.sh@819 -- # '[' -z 3566583 ']' 00:27:25.520 10:23:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.520 10:23:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:25.520 10:23:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.520 10:23:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:25.520 10:23:58 -- common/autotest_common.sh@10 -- # set +x 00:27:25.520 [2024-04-17 10:23:58.725194] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:25.520 [2024-04-17 10:23:58.725247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.520 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.520 [2024-04-17 10:23:58.803572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.779 [2024-04-17 10:23:58.891527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:25.779 [2024-04-17 10:23:58.891677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.779 [2024-04-17 10:23:58.891689] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.779 [2024-04-17 10:23:58.891699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
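Condensed, the nvmf_tcp_init sequence traced above sets up a two-sided topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24 to act as the target side, cvl_0_1 stays in the root namespace as 10.0.0.1/24 for the initiator side, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. The sketch below restates those traced commands, using the interface names and addresses from this run.

ip netns add cvl_0_0_ns_spdk                    # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the first CVL port into it
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                              # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back again
# nvmf_tgt then runs inside the namespace (NVMF_APP is prefixed with the netns exec):
#   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E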
00:27:25.779 [2024-04-17 10:23:58.891802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.779 [2024-04-17 10:23:58.891919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.779 [2024-04-17 10:23:58.892032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.779 [2024-04-17 10:23:58.892032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.378 10:23:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.378 10:23:59 -- common/autotest_common.sh@852 -- # return 0 00:27:26.378 10:23:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:26.378 10:23:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:26.378 10:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:26.661 10:23:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.661 10:23:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.661 10:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.661 10:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:26.661 [2024-04-17 10:23:59.704463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.661 10:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.661 10:23:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:26.661 10:23:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:26.661 10:23:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:26.661 10:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:26.661 10:23:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:26.661 10:23:59 -- target/shutdown.sh@28 -- # cat 00:27:26.661 10:23:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:26.661 10:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:26.661 10:23:59 -- common/autotest_common.sh@10 -- # set +x 00:27:26.661 Malloc1 00:27:26.661 [2024-04-17 10:23:59.804516] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.661 Malloc2 
00:27:26.661 Malloc3 00:27:26.661 Malloc4 00:27:26.661 Malloc5 00:27:26.933 Malloc6 00:27:26.933 Malloc7 00:27:26.933 Malloc8 00:27:26.933 Malloc9 00:27:26.933 Malloc10 00:27:26.933 10:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:26.933 10:24:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:26.933 10:24:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:26.933 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.933 10:24:00 -- target/shutdown.sh@102 -- # perfpid=3566938 00:27:26.933 10:24:00 -- target/shutdown.sh@103 -- # waitforlisten 3566938 /var/tmp/bdevperf.sock 00:27:26.933 10:24:00 -- common/autotest_common.sh@819 -- # '[' -z 3566938 ']' 00:27:26.933 10:24:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:26.933 10:24:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:26.933 10:24:00 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:26.933 10:24:00 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:26.933 10:24:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:26.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:26.933 10:24:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:26.933 10:24:00 -- nvmf/common.sh@520 -- # config=() 00:27:26.933 10:24:00 -- common/autotest_common.sh@10 -- # set +x 00:27:26.933 10:24:00 -- nvmf/common.sh@520 -- # local subsystem config 00:27:26.933 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:26.933 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:26.933 { 00:27:26.933 "params": { 00:27:26.933 "name": "Nvme$subsystem", 00:27:26.933 "trtype": "$TEST_TRANSPORT", 00:27:26.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.933 "adrfam": "ipv4", 00:27:26.933 "trsvcid": "$NVMF_PORT", 00:27:26.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.933 "hdgst": ${hdgst:-false}, 00:27:26.933 "ddgst": ${ddgst:-false} 00:27:26.933 }, 00:27:26.933 "method": "bdev_nvme_attach_controller" 00:27:26.934 } 00:27:26.934 EOF 00:27:26.934 )") 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:26.934 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:26.934 { 00:27:26.934 "params": { 00:27:26.934 "name": "Nvme$subsystem", 00:27:26.934 "trtype": "$TEST_TRANSPORT", 00:27:26.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.934 "adrfam": "ipv4", 00:27:26.934 "trsvcid": "$NVMF_PORT", 00:27:26.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.934 "hdgst": ${hdgst:-false}, 00:27:26.934 "ddgst": ${ddgst:-false} 00:27:26.934 }, 00:27:26.934 "method": "bdev_nvme_attach_controller" 00:27:26.934 } 00:27:26.934 EOF 00:27:26.934 )") 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:26.934 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:26.934 { 00:27:26.934 "params": { 00:27:26.934 "name": "Nvme$subsystem", 00:27:26.934 "trtype": "$TEST_TRANSPORT", 00:27:26.934 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:26.934 "adrfam": "ipv4", 00:27:26.934 "trsvcid": "$NVMF_PORT", 00:27:26.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.934 "hdgst": ${hdgst:-false}, 00:27:26.934 "ddgst": ${ddgst:-false} 00:27:26.934 }, 00:27:26.934 "method": "bdev_nvme_attach_controller" 00:27:26.934 } 00:27:26.934 EOF 00:27:26.934 )") 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:26.934 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:26.934 { 00:27:26.934 "params": { 00:27:26.934 "name": "Nvme$subsystem", 00:27:26.934 "trtype": "$TEST_TRANSPORT", 00:27:26.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.934 "adrfam": "ipv4", 00:27:26.934 "trsvcid": "$NVMF_PORT", 00:27:26.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.934 "hdgst": ${hdgst:-false}, 00:27:26.934 "ddgst": ${ddgst:-false} 00:27:26.934 }, 00:27:26.934 "method": "bdev_nvme_attach_controller" 00:27:26.934 } 00:27:26.934 EOF 00:27:26.934 )") 00:27:26.934 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 [2024-04-17 10:24:00.283415] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:27:27.193 [2024-04-17 10:24:00.283477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566938 ] 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:27.193 { 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme$subsystem", 00:27:27.193 "trtype": "$TEST_TRANSPORT", 00:27:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "$NVMF_PORT", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.193 "hdgst": ${hdgst:-false}, 00:27:27.193 "ddgst": ${ddgst:-false} 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 } 00:27:27.193 EOF 00:27:27.193 )") 00:27:27.193 10:24:00 -- nvmf/common.sh@542 -- # cat 00:27:27.193 10:24:00 -- nvmf/common.sh@544 -- # jq . 
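
The nvmf/common.sh helper traced above (gen_nvmf_target_json) builds one bdev_nvme_attach_controller block per requested subsystem number and hands the result to bdevperf as JSON on /dev/fd/63; the fully resolved config it produces is printed in the trace that follows. Below is a simplified, standalone sketch of that pattern. The exact envelope the real helper wraps around the joined blocks before the final jq pass is not visible in this excerpt, so a bare JSON array stands in for it here; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the test environment.

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller block per subsystem id, as in the trace above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the blocks with commas and pretty-print. The real helper embeds the
    # joined blocks in bdevperf's full JSON config rather than a bare array.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

bdevperf consumes the generated config through process substitution, which is why the traced command line shows --json /dev/fd/63:

build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10
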
00:27:27.193 10:24:00 -- nvmf/common.sh@545 -- # IFS=, 00:27:27.193 10:24:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme1", 00:27:27.193 "trtype": "tcp", 00:27:27.193 "traddr": "10.0.0.2", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "4420", 00:27:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.193 "hdgst": false, 00:27:27.193 "ddgst": false 00:27:27.193 }, 00:27:27.193 "method": "bdev_nvme_attach_controller" 00:27:27.193 },{ 00:27:27.193 "params": { 00:27:27.193 "name": "Nvme2", 00:27:27.193 "trtype": "tcp", 00:27:27.193 "traddr": "10.0.0.2", 00:27:27.193 "adrfam": "ipv4", 00:27:27.193 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme3", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme4", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme5", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme6", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme7", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme8", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": 
"bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme9", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 },{ 00:27:27.194 "params": { 00:27:27.194 "name": "Nvme10", 00:27:27.194 "trtype": "tcp", 00:27:27.194 "traddr": "10.0.0.2", 00:27:27.194 "adrfam": "ipv4", 00:27:27.194 "trsvcid": "4420", 00:27:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:27.194 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:27.194 "hdgst": false, 00:27:27.194 "ddgst": false 00:27:27.194 }, 00:27:27.194 "method": "bdev_nvme_attach_controller" 00:27:27.194 }' 00:27:27.194 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.194 [2024-04-17 10:24:00.365708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.194 [2024-04-17 10:24:00.450284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.572 Running I/O for 10 seconds... 00:27:29.509 10:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:29.509 10:24:02 -- common/autotest_common.sh@852 -- # return 0 00:27:29.509 10:24:02 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:29.509 10:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.509 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.509 10:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.509 10:24:02 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:29.509 10:24:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:29.509 10:24:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:29.509 10:24:02 -- target/shutdown.sh@57 -- # local ret=1 00:27:29.509 10:24:02 -- target/shutdown.sh@58 -- # local i 00:27:29.509 10:24:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:29.509 10:24:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:29.509 10:24:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:29.509 10:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.509 10:24:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:29.509 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:27:29.509 10:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.509 10:24:02 -- target/shutdown.sh@60 -- # read_io_count=211 00:27:29.509 10:24:02 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:27:29.509 10:24:02 -- target/shutdown.sh@64 -- # ret=0 00:27:29.509 10:24:02 -- target/shutdown.sh@65 -- # break 00:27:29.509 10:24:02 -- target/shutdown.sh@69 -- # return 0 00:27:29.509 10:24:02 -- target/shutdown.sh@109 -- # killprocess 3566938 00:27:29.509 10:24:02 -- common/autotest_common.sh@926 -- # '[' -z 3566938 ']' 00:27:29.509 10:24:02 -- common/autotest_common.sh@930 -- # kill -0 3566938 00:27:29.509 10:24:02 -- common/autotest_common.sh@931 -- # uname 00:27:29.509 10:24:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.509 10:24:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3566938 00:27:29.509 10:24:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.509 10:24:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:27:29.509 10:24:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3566938' 00:27:29.509 killing process with pid 3566938 00:27:29.509 10:24:02 -- common/autotest_common.sh@945 -- # kill 3566938 00:27:29.509 10:24:02 -- common/autotest_common.sh@950 -- # wait 3566938
00:27:29.509 Received shutdown signal, test time was about 0.897242 seconds
00:27:29.509
00:27:29.509 Latency(us)
00:27:29.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:29.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.509 Verification LBA range: start 0x0 length 0x400
00:27:29.509 Nvme1n1 : 0.88 358.74 22.42 0.00 0.00 173792.02 26214.40 140127.88
00:27:29.509 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.509 Verification LBA range: start 0x0 length 0x400
00:27:29.509 Nvme2n1 : 0.90 351.47 21.97 0.00 0.00 175774.93 23592.96 168725.41
00:27:29.509 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.509 Verification LBA range: start 0x0 length 0x400
00:27:29.509 Nvme3n1 : 0.88 358.09 22.38 0.00 0.00 170125.78 26333.56 138221.38
00:27:29.509 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.509 Verification LBA range: start 0x0 length 0x400
00:27:29.509 Nvme4n1 : 0.88 356.75 22.30 0.00 0.00 168681.53 27405.96 136314.88
00:27:29.510 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme5n1 : 0.89 355.64 22.23 0.00 0.00 167267.16 27167.65 135361.63
00:27:29.510 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme6n1 : 0.86 315.93 19.75 0.00 0.00 186169.33 22282.24 143940.89
00:27:29.510 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme7n1 : 0.89 354.24 22.14 0.00 0.00 164368.07 24665.37 137268.13
00:27:29.510 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme8n1 : 0.89 353.15 22.07 0.00 0.00 162979.80 24188.74 136314.88
00:27:29.510 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme9n1 : 0.89 352.54 22.03 0.00 0.00 161303.29 24427.05 142034.39
00:27:29.510 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:29.510 Verification LBA range: start 0x0 length 0x400
00:27:29.510 Nvme10n1 : 0.87 311.32 19.46 0.00 0.00 179662.17 24903.68 142987.64
00:27:29.510 ===================================================================================================================
00:27:29.510 Total : 3467.88 216.74 0.00 0.00 170678.31 22282.24 168725.41
00:27:29.769 10:24:03 -- target/shutdown.sh@112 -- # sleep 1 00:27:30.704 10:24:04 -- target/shutdown.sh@113 -- # kill -0 3566583 00:27:30.704 10:24:04 -- target/shutdown.sh@115 -- # stoptarget 00:27:30.704 10:24:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:30.704 10:24:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:30.704 10:24:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.704 10:24:04 --
target/shutdown.sh@45 -- # nvmftestfini 00:27:30.704 10:24:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:30.704 10:24:04 -- nvmf/common.sh@116 -- # sync 00:27:30.704 10:24:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:30.704 10:24:04 -- nvmf/common.sh@119 -- # set +e 00:27:30.704 10:24:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:30.963 10:24:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:30.963 rmmod nvme_tcp 00:27:30.963 rmmod nvme_fabrics 00:27:30.963 rmmod nvme_keyring 00:27:30.963 10:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:30.963 10:24:04 -- nvmf/common.sh@123 -- # set -e 00:27:30.963 10:24:04 -- nvmf/common.sh@124 -- # return 0 00:27:30.963 10:24:04 -- nvmf/common.sh@477 -- # '[' -n 3566583 ']' 00:27:30.963 10:24:04 -- nvmf/common.sh@478 -- # killprocess 3566583 00:27:30.963 10:24:04 -- common/autotest_common.sh@926 -- # '[' -z 3566583 ']' 00:27:30.963 10:24:04 -- common/autotest_common.sh@930 -- # kill -0 3566583 00:27:30.963 10:24:04 -- common/autotest_common.sh@931 -- # uname 00:27:30.963 10:24:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.963 10:24:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3566583 00:27:30.963 10:24:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:30.963 10:24:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:30.963 10:24:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3566583' 00:27:30.963 killing process with pid 3566583 00:27:30.963 10:24:04 -- common/autotest_common.sh@945 -- # kill 3566583 00:27:30.963 10:24:04 -- common/autotest_common.sh@950 -- # wait 3566583 00:27:31.534 10:24:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:31.534 10:24:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:31.534 10:24:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:31.534 10:24:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.534 10:24:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:31.534 10:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.534 10:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.534 10:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.453 10:24:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:33.453 00:27:33.453 real 0m8.307s 00:27:33.453 user 0m25.653s 00:27:33.453 sys 0m1.452s 00:27:33.453 10:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.453 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.453 ************************************ 00:27:33.453 END TEST nvmf_shutdown_tc2 00:27:33.453 ************************************ 00:27:33.453 10:24:06 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:33.453 10:24:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.453 10:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.453 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.453 ************************************ 00:27:33.453 START TEST nvmf_shutdown_tc3 00:27:33.453 ************************************ 00:27:33.453 10:24:06 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:33.453 10:24:06 -- target/shutdown.sh@120 -- # starttarget 00:27:33.453 10:24:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:33.453 10:24:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:33.453 10:24:06 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:33.453 10:24:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:33.453 10:24:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:33.453 10:24:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:33.453 10:24:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.453 10:24:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.453 10:24:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.453 10:24:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:33.453 10:24:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:33.453 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.453 10:24:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:33.453 10:24:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:33.453 10:24:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:33.453 10:24:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:33.453 10:24:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:33.453 10:24:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:33.453 10:24:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:33.453 10:24:06 -- nvmf/common.sh@294 -- # net_devs=() 00:27:33.453 10:24:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:33.453 10:24:06 -- nvmf/common.sh@295 -- # e810=() 00:27:33.453 10:24:06 -- nvmf/common.sh@295 -- # local -ga e810 00:27:33.453 10:24:06 -- nvmf/common.sh@296 -- # x722=() 00:27:33.453 10:24:06 -- nvmf/common.sh@296 -- # local -ga x722 00:27:33.453 10:24:06 -- nvmf/common.sh@297 -- # mlx=() 00:27:33.453 10:24:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:33.453 10:24:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.453 10:24:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:33.453 10:24:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:33.453 10:24:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:33.453 10:24:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:33.453 10:24:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:33.453 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:33.453 10:24:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:33.453 10:24:06 
-- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:33.453 10:24:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:33.453 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:33.453 10:24:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:33.453 10:24:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:33.453 10:24:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:33.453 10:24:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.453 10:24:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:33.453 10:24:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.453 10:24:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:33.453 Found net devices under 0000:af:00.0: cvl_0_0 00:27:33.453 10:24:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.453 10:24:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:33.454 10:24:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.454 10:24:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:33.454 10:24:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.454 10:24:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:33.454 Found net devices under 0000:af:00.1: cvl_0_1 00:27:33.454 10:24:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.454 10:24:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:33.454 10:24:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:33.454 10:24:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:33.454 10:24:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:33.454 10:24:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:33.454 10:24:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.454 10:24:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.454 10:24:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.454 10:24:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:33.454 10:24:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.454 10:24:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.454 10:24:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:33.454 10:24:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.454 10:24:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.454 10:24:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:33.454 10:24:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:33.454 10:24:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.454 10:24:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.718 10:24:06 -- nvmf/common.sh@253 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:33.718 10:24:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.718 10:24:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:33.718 10:24:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.718 10:24:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.718 10:24:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.718 10:24:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:33.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:27:33.718 00:27:33.718 --- 10.0.0.2 ping statistics --- 00:27:33.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.718 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:33.718 10:24:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:33.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:27:33.718 00:27:33.718 --- 10.0.0.1 ping statistics --- 00:27:33.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.718 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:33.718 10:24:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.718 10:24:06 -- nvmf/common.sh@410 -- # return 0 00:27:33.718 10:24:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:33.718 10:24:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.718 10:24:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:33.718 10:24:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:33.718 10:24:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.718 10:24:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:33.718 10:24:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:33.718 10:24:06 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:33.718 10:24:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:33.718 10:24:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:33.718 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.718 10:24:06 -- nvmf/common.sh@469 -- # nvmfpid=3568229 00:27:33.718 10:24:06 -- nvmf/common.sh@470 -- # waitforlisten 3568229 00:27:33.718 10:24:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:33.718 10:24:06 -- common/autotest_common.sh@819 -- # '[' -z 3568229 ']' 00:27:33.718 10:24:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.718 10:24:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:33.718 10:24:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.718 10:24:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:33.718 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:27:33.718 [2024-04-17 10:24:07.033899] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
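
The nvmftestinit/nvmf_tcp_init sequence traced above splits the two E810 ports between the root namespace and a dedicated cvl_0_0_ns_spdk namespace, so the target (10.0.0.2 on cvl_0_0, inside the namespace) and the initiator (10.0.0.1 on cvl_0_1) exchange NVMe/TCP traffic over real links. Condensed into plain commands, the setup in the trace amounts to the sketch below (run as root; names and addresses are the ones shown above).

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # target-side port, ends up inside $NS with 10.0.0.2
INITIATOR_IF=cvl_0_1     # initiator-side port, stays in the root ns with 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port towards the initiator interface and check both
# directions before the target is started.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
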
00:27:33.718 [2024-04-17 10:24:07.033952] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.977 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.977 [2024-04-17 10:24:07.112511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:33.977 [2024-04-17 10:24:07.200052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:33.977 [2024-04-17 10:24:07.200200] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.977 [2024-04-17 10:24:07.200212] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.977 [2024-04-17 10:24:07.200221] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.977 [2024-04-17 10:24:07.200325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.977 [2024-04-17 10:24:07.200437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.977 [2024-04-17 10:24:07.200548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:33.977 [2024-04-17 10:24:07.200548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.912 10:24:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:34.912 10:24:07 -- common/autotest_common.sh@852 -- # return 0 00:27:34.912 10:24:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:34.912 10:24:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:34.912 10:24:07 -- common/autotest_common.sh@10 -- # set +x 00:27:34.912 10:24:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.912 10:24:08 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.912 10:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.913 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.913 [2024-04-17 10:24:08.008446] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.913 10:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.913 10:24:08 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:34.913 10:24:08 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:34.913 10:24:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:34.913 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.913 10:24:08 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- 
target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:34.913 10:24:08 -- target/shutdown.sh@28 -- # cat 00:27:34.913 10:24:08 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:34.913 10:24:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.913 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:34.913 Malloc1 00:27:34.913 [2024-04-17 10:24:08.108154] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.913 Malloc2 00:27:34.913 Malloc3 00:27:34.913 Malloc4 00:27:35.244 Malloc5 00:27:35.244 Malloc6 00:27:35.244 Malloc7 00:27:35.244 Malloc8 00:27:35.244 Malloc9 00:27:35.244 Malloc10 00:27:35.244 10:24:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:35.244 10:24:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:35.244 10:24:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:35.244 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.244 10:24:08 -- target/shutdown.sh@124 -- # perfpid=3568547 00:27:35.244 10:24:08 -- target/shutdown.sh@125 -- # waitforlisten 3568547 /var/tmp/bdevperf.sock 00:27:35.244 10:24:08 -- common/autotest_common.sh@819 -- # '[' -z 3568547 ']' 00:27:35.244 10:24:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.244 10:24:08 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:35.244 10:24:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:35.244 10:24:08 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:35.244 10:24:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
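
The create_subsystems step above collects its RPCs into rpcs.txt and replays them against the target, which is why only the resulting Malloc1..Malloc10 bdevs and the listener notice on 10.0.0.2 port 4420 show up in the log. As a hypothetical equivalent (the per-subsystem RPC arguments are not echoed in this excerpt), the same state could be built with scripts/rpc.py roughly as follows; the Malloc sizes and serial numbers are illustrative, while the transport options and listener address come from the trace.

rpc=scripts/rpc.py    # talks to the target's default /var/tmp/spdk.sock

# TCP transport with the options seen in the trace (-o, -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 10); do
    # Backing RAM bdev (size and block size are illustrative here).
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    # One subsystem per bdev, open to any host, listening on the target IP.
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
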
00:27:35.244 10:24:08 -- nvmf/common.sh@520 -- # config=() 00:27:35.244 10:24:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:35.244 10:24:08 -- nvmf/common.sh@520 -- # local subsystem config 00:27:35.244 10:24:08 -- common/autotest_common.sh@10 -- # set +x 00:27:35.244 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.244 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.244 { 00:27:35.244 "params": { 00:27:35.244 "name": "Nvme$subsystem", 00:27:35.244 "trtype": "$TEST_TRANSPORT", 00:27:35.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.244 "adrfam": "ipv4", 00:27:35.244 "trsvcid": "$NVMF_PORT", 00:27:35.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.244 "hdgst": ${hdgst:-false}, 00:27:35.244 "ddgst": ${ddgst:-false} 00:27:35.244 }, 00:27:35.244 "method": "bdev_nvme_attach_controller" 00:27:35.244 } 00:27:35.244 EOF 00:27:35.245 )") 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.245 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.245 { 00:27:35.245 "params": { 00:27:35.245 "name": "Nvme$subsystem", 00:27:35.245 "trtype": "$TEST_TRANSPORT", 00:27:35.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.245 "adrfam": "ipv4", 00:27:35.245 "trsvcid": "$NVMF_PORT", 00:27:35.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.245 "hdgst": ${hdgst:-false}, 00:27:35.245 "ddgst": ${ddgst:-false} 00:27:35.245 }, 00:27:35.245 "method": "bdev_nvme_attach_controller" 00:27:35.245 } 00:27:35.245 EOF 00:27:35.245 )") 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.245 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.245 { 00:27:35.245 "params": { 00:27:35.245 "name": "Nvme$subsystem", 00:27:35.245 "trtype": "$TEST_TRANSPORT", 00:27:35.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.245 "adrfam": "ipv4", 00:27:35.245 "trsvcid": "$NVMF_PORT", 00:27:35.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.245 "hdgst": ${hdgst:-false}, 00:27:35.245 "ddgst": ${ddgst:-false} 00:27:35.245 }, 00:27:35.245 "method": "bdev_nvme_attach_controller" 00:27:35.245 } 00:27:35.245 EOF 00:27:35.245 )") 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.245 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.245 { 00:27:35.245 "params": { 00:27:35.245 "name": "Nvme$subsystem", 00:27:35.245 "trtype": "$TEST_TRANSPORT", 00:27:35.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.245 "adrfam": "ipv4", 00:27:35.245 "trsvcid": "$NVMF_PORT", 00:27:35.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.245 "hdgst": ${hdgst:-false}, 00:27:35.245 "ddgst": ${ddgst:-false} 00:27:35.245 }, 00:27:35.245 "method": "bdev_nvme_attach_controller" 00:27:35.245 } 00:27:35.245 EOF 00:27:35.245 )") 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.245 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.245 { 00:27:35.245 "params": { 00:27:35.245 "name": "Nvme$subsystem", 00:27:35.245 "trtype": 
"$TEST_TRANSPORT", 00:27:35.245 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.245 "adrfam": "ipv4", 00:27:35.245 "trsvcid": "$NVMF_PORT", 00:27:35.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.245 "hdgst": ${hdgst:-false}, 00:27:35.245 "ddgst": ${ddgst:-false} 00:27:35.245 }, 00:27:35.245 "method": "bdev_nvme_attach_controller" 00:27:35.245 } 00:27:35.245 EOF 00:27:35.245 )") 00:27:35.245 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.504 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.504 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.504 { 00:27:35.504 "params": { 00:27:35.504 "name": "Nvme$subsystem", 00:27:35.504 "trtype": "$TEST_TRANSPORT", 00:27:35.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.504 "adrfam": "ipv4", 00:27:35.504 "trsvcid": "$NVMF_PORT", 00:27:35.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.504 "hdgst": ${hdgst:-false}, 00:27:35.504 "ddgst": ${ddgst:-false} 00:27:35.504 }, 00:27:35.504 "method": "bdev_nvme_attach_controller" 00:27:35.504 } 00:27:35.504 EOF 00:27:35.504 )") 00:27:35.504 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.504 [2024-04-17 10:24:08.584939] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:35.504 [2024-04-17 10:24:08.584998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568547 ] 00:27:35.504 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.504 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.504 { 00:27:35.504 "params": { 00:27:35.504 "name": "Nvme$subsystem", 00:27:35.504 "trtype": "$TEST_TRANSPORT", 00:27:35.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.504 "adrfam": "ipv4", 00:27:35.504 "trsvcid": "$NVMF_PORT", 00:27:35.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.504 "hdgst": ${hdgst:-false}, 00:27:35.504 "ddgst": ${ddgst:-false} 00:27:35.504 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 } 00:27:35.505 EOF 00:27:35.505 )") 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.505 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.505 { 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme$subsystem", 00:27:35.505 "trtype": "$TEST_TRANSPORT", 00:27:35.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "$NVMF_PORT", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.505 "hdgst": ${hdgst:-false}, 00:27:35.505 "ddgst": ${ddgst:-false} 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 } 00:27:35.505 EOF 00:27:35.505 )") 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.505 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.505 { 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme$subsystem", 00:27:35.505 "trtype": "$TEST_TRANSPORT", 00:27:35.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": 
"$NVMF_PORT", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.505 "hdgst": ${hdgst:-false}, 00:27:35.505 "ddgst": ${ddgst:-false} 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 } 00:27:35.505 EOF 00:27:35.505 )") 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.505 10:24:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:35.505 { 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme$subsystem", 00:27:35.505 "trtype": "$TEST_TRANSPORT", 00:27:35.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "$NVMF_PORT", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.505 "hdgst": ${hdgst:-false}, 00:27:35.505 "ddgst": ${ddgst:-false} 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 } 00:27:35.505 EOF 00:27:35.505 )") 00:27:35.505 10:24:08 -- nvmf/common.sh@542 -- # cat 00:27:35.505 10:24:08 -- nvmf/common.sh@544 -- # jq . 00:27:35.505 10:24:08 -- nvmf/common.sh@545 -- # IFS=, 00:27:35.505 10:24:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme1", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme2", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme3", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme4", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme5", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme6", 00:27:35.505 
"trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme7", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme8", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme9", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 },{ 00:27:35.505 "params": { 00:27:35.505 "name": "Nvme10", 00:27:35.505 "trtype": "tcp", 00:27:35.505 "traddr": "10.0.0.2", 00:27:35.505 "adrfam": "ipv4", 00:27:35.505 "trsvcid": "4420", 00:27:35.505 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.505 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.505 "hdgst": false, 00:27:35.505 "ddgst": false 00:27:35.505 }, 00:27:35.505 "method": "bdev_nvme_attach_controller" 00:27:35.505 }' 00:27:35.505 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.505 [2024-04-17 10:24:08.667958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.505 [2024-04-17 10:24:08.750916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.409 Running I/O for 10 seconds... 
00:27:37.409 10:24:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:37.409 10:24:10 -- common/autotest_common.sh@852 -- # return 0 00:27:37.409 10:24:10 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:37.409 10:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.409 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.409 10:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.409 10:24:10 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:37.409 10:24:10 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:37.409 10:24:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:37.409 10:24:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:37.409 10:24:10 -- target/shutdown.sh@57 -- # local ret=1 00:27:37.409 10:24:10 -- target/shutdown.sh@58 -- # local i 00:27:37.409 10:24:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:37.409 10:24:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:37.409 10:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:37.409 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.409 10:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:37.409 10:24:10 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:37.409 10:24:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:37.409 10:24:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:37.409 10:24:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:37.409 10:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.409 10:24:10 -- common/autotest_common.sh@10 -- # set +x 00:27:37.409 10:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.409 10:24:10 -- target/shutdown.sh@60 -- # read_io_count=129 00:27:37.409 10:24:10 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:27:37.409 10:24:10 -- target/shutdown.sh@64 -- # ret=0 00:27:37.409 10:24:10 -- target/shutdown.sh@65 -- # break 00:27:37.409 10:24:10 -- target/shutdown.sh@69 -- # return 0 00:27:37.409 10:24:10 -- target/shutdown.sh@134 -- # killprocess 3568229 00:27:37.409 10:24:10 -- common/autotest_common.sh@926 -- # '[' -z 3568229 ']' 00:27:37.409 10:24:10 -- common/autotest_common.sh@930 -- # kill -0 3568229 00:27:37.409 10:24:10 -- common/autotest_common.sh@931 -- # uname 00:27:37.409 10:24:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:37.409 10:24:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3568229 00:27:37.683 10:24:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:37.683 10:24:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:37.683 10:24:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3568229' 00:27:37.683 killing process with pid 3568229 00:27:37.683 10:24:10 -- common/autotest_common.sh@945 -- # kill 3568229 00:27:37.683 10:24:10 -- common/autotest_common.sh@950 -- # wait 3568229 00:27:37.684 [2024-04-17 
10:24:10.753044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16860 is same with the state(5) to be set
[... last message repeated for tqpair=0xa16860 until 10:24:10.753661 ...]
00:27:37.684 [2024-04-17 10:24:10.754613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa14360 is same with the state(5) to be set
[... last message repeated for tqpair=0xa14360 until 10:24:10.755171 ...]
00:27:37.685 [2024-04-17 10:24:10.757047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa15150 is same with the state(5) to be set
[... last message repeated for tqpair=0xa15150 until 10:24:10.757419 ...]
00:27:37.686 [2024-04-17 10:24:10.758130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa15600 is same with the state(5) to be set
00:27:37.686 [2024-04-17 10:24:10.758772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa15a90 is same with the state(5) to be set
[... last message repeated for tqpair=0xa15a90 until 10:24:10.758808 ...]
00:27:37.686 [2024-04-17 10:24:10.759286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa15f20 is same with the state(5) to be set
[... last message repeated for tqpair=0xa15f20 until 10:24:10.759318 ...]
00:27:37.686 [2024-04-17 10:24:10.759594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa163d0 is same with the state(5) to be set
[... last message repeated for tqpair=0xa163d0 until 10:24:10.759945 ...]
00:27:37.687 [2024-04-17 10:24:10.792433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:37.687 [2024-04-17 10:24:10.792493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.687 [2024-04-17 10:24:10.792576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d46a0 is same with the state(5) to be set
[... the ASYNC EVENT REQUEST (cid:0..3) / ABORTED - SQ DELETION pairs and the nvme_tcp.c: 322 recv-state error repeat until 10:24:10.793674 for tqpair=0x9a4b80, 0x9d4e20, 0x9139b0, 0x934b20, 0x9324b0, 0x9d5250, 0x92c970, 0x874520 and 0x910c50 ...]
00:27:37.688 [2024-04-17 10:24:10.793733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.688 [2024-04-17 10:24:10.793746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar READ/WRITE sqid:1 commands (lba 18944-27008, len:128), each followed by an ABORTED - SQ DELETION completion, repeat until 10:24:10.794842 ...]
00:27:37.689 [2024-04-17 10:24:10.794855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.689 [2024-04-17 10:24:10.794865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.689 [2024-04-17 10:24:10.794877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.689 [2024-04-17 10:24:10.794887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.689 [2024-04-17 10:24:10.794899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.689 [2024-04-17 10:24:10.794912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.689 [2024-04-17 10:24:10.794926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.689 [2024-04-17 10:24:10.794936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.689 [2024-04-17 10:24:10.794949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.794959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.794972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.794994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacd7c0 is same with the state(5) to be set 00:27:37.690 [2024-04-17 10:24:10.795286] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xacd7c0 was disconnected and freed. reset controller. 
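Reading aid (not part of the test output): every completion in the burst above is printed by SPDK as ABORTED - SQ DELETION (00/08) together with the raw fields cdw0, sqhd, p, m and dnr. A minimal sketch of how that (SCT/SC) pair and the trailing bits read, assuming the generic status-code table of the NVMe base specification; the helper name decode_status is hypothetical and only mirrors the formatting seen in this log:

```python
# Illustrative decoder for the "(SCT/SC) ... p:.. m:.. dnr:.." fields printed above.
# Names follow the NVMe base spec's generic (SCT 0x0) status-code table,
# not SPDK's internal tables.

GENERIC_STATUS = {            # SCT 0x0: Generic Command Status (subset)
    0x00: "SUCCESS",
    0x07: "COMMAND ABORT REQUESTED",
    0x08: "ABORTED - SQ DELETION",   # the value seen throughout this log
}

def decode_status(sct: int, sc: int, p: int, m: int, dnr: int) -> str:
    """Render an NVMe completion status roughly the way the log prints it."""
    if sct == 0:
        name = GENERIC_STATUS.get(sc, f"sc=0x{sc:02x}")
    else:
        name = f"sct=0x{sct:x}/sc=0x{sc:02x}"
    flags = f"p:{p} m:{m} dnr:{dnr}"   # phase tag, more bit, do-not-retry bit
    return f"{name} ({sct:02x}/{sc:02x}) {flags}"

if __name__ == "__main__":
    # Matches the completions above: ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0
    print(decode_status(0x0, 0x08, 0, 0, 0))
```

Since dnr is 0 in every completion, the aborted I/O may be retried, which is consistent with the "reset controller" notice that closes each burst.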
00:27:37.690 [2024-04-17 10:24:10.795405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 
[2024-04-17 10:24:10.795635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 
10:24:10.795870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.690 [2024-04-17 10:24:10.795984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.690 [2024-04-17 10:24:10.795993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.691 [2024-04-17 10:24:10.796835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.691 [2024-04-17 10:24:10.796848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.796858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.796934] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x901800 was disconnected and freed. reset controller. 00:27:37.692 [2024-04-17 10:24:10.796981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.796994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.797009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.797019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.797031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.797042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.797055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.797065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.807919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.807944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.807961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.807992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.692 [2024-04-17 10:24:10.808964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.692 [2024-04-17 10:24:10.808982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.808994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:37.693 [2024-04-17 10:24:10.809223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 
[2024-04-17 10:24:10.809533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.809782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.809798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5e0b0 is same with the state(5) to be set 00:27:37.693 [2024-04-17 10:24:10.809866] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa5e0b0 was disconnected and freed. reset controller. 
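Each burst above follows the same shape: dozens of queued READ/WRITE commands on sqid:1 are listed with ABORTED - SQ DELETION (00/08) completions, then bdev_nvme_disconnected_qpair_cb reports the qpair (here 0xacd7c0, 0x901800 and 0xa5e0b0) as disconnected and freed before the controller is reset. When reading this kind of console output offline, a burst can be collapsed into one line per qpair; the sketch below is a hypothetical reading aid, not an SPDK tool, and its regular expressions rely only on the field layout visible in the lines above:

```python
import re
from collections import Counter

# Field layout taken from the notices above; an offline reading aid, not an SPDK tool.
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
FREED_RE = re.compile(r"qpair (0x[0-9a-fA-F]+) was disconnected and freed")

def summarize_bursts(log_text: str) -> list[str]:
    """Emit one summary line per 'disconnected and freed' burst in the log text."""
    summaries = []
    start = 0
    for freed in FREED_RE.finditer(log_text):
        chunk = log_text[start:freed.end()]
        start = freed.end()
        counts = Counter(m.group(1) for m in CMD_RE.finditer(chunk))
        lbas = [int(m.group(5)) for m in CMD_RE.finditer(chunk)]
        span = f"lba {min(lbas)}..{max(lbas)}" if lbas else "no I/O listed"
        summaries.append(
            f"qpair {freed.group(1)}: {counts['READ']} reads / {counts['WRITE']} writes "
            f"aborted by SQ deletion ({span})"
        )
    return summaries

if __name__ == "__main__":
    sample = ("[...] *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 [...] "
              "*NOTICE*: qpair 0xacd7c0 was disconnected and freed. reset controller.")
    print("\n".join(summarize_bursts(sample)))
```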
00:27:37.693 [2024-04-17 10:24:10.810016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.810036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.810055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.810070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.810086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.810100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.693 [2024-04-17 10:24:10.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.693 [2024-04-17 10:24:10.810147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 
10:24:10.810335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.810977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.810994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.694 [2024-04-17 10:24:10.811346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.694 [2024-04-17 10:24:10.811363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.811969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.811986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b9c50 is same with the state(5) to be set 00:27:37.695 [2024-04-17 10:24:10.812083] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9b9c50 was disconnected and freed. reset controller. 00:27:37.695 [2024-04-17 10:24:10.812139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.695 [2024-04-17 10:24:10.812686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.695 [2024-04-17 10:24:10.812700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.812978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.812995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.696 [2024-04-17 10:24:10.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.813634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 
[2024-04-17 10:24:10.820937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.696 [2024-04-17 10:24:10.820955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.696 [2024-04-17 10:24:10.820971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.820990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.821216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.821233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb230 is same with the state(5) to be set 00:27:37.697 [2024-04-17 10:24:10.821305] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9bb230 was disconnected and freed. reset controller. 
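Editor's note: that is the third qpair in a row (0xa5e0b0, 0x9b9c50, now 0x9bb230) whose outstanding I/O was completed with ABORTED - SQ DELETION before bdev_nvme_disconnected_qpair_cb freed it and kicked off a controller reset. Every aborted entry reads "lba:<start> len:128"; assuming 512-byte logical blocks (an assumption on my part, the console output never states the namespace LBA format), each entry is a single 64 KiB I/O and the lba values step in 128-block increments, i.e. adjacent 64 KiB ranges. A small standalone C sketch of that arithmetic, with the example lba taken from one of the aborted READs above:

#include <stdint.h>
#include <stdio.h>

/* Convert an "lba:<start> len:<blocks>" pair from the log into a byte range.
 * BLOCK_SIZE is an assumption (512 B); the console output above does not
 * record the namespace's actual LBA format. */
#define BLOCK_SIZE 512u

int main(void)
{
    uint64_t lba = 28160;   /* taken from one of the aborted READs above */
    uint32_t len = 128;     /* "len:128" in the same log line */

    uint64_t offset = lba * BLOCK_SIZE;
    uint64_t bytes  = (uint64_t)len * BLOCK_SIZE;

    printf("lba:%llu len:%u -> offset %llu, %llu bytes (%.0f KiB)\n",
           (unsigned long long)lba, (unsigned)len,
           (unsigned long long)offset, (unsigned long long)bytes,
           bytes / 1024.0);
    return 0;
}

Under the 512 B assumption it reports offset 14417920 and 65536 bytes (64 KiB) for lba 28160.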
00:27:37.697 [2024-04-17 10:24:10.838343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d46a0 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a4b80 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4e20 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9139b0 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934b20 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9324b0 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d5250 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92c970 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x874520 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.838601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910c50 (9): Bad file descriptor 00:27:37.697 [2024-04-17 10:24:10.847389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.697 [2024-04-17 10:24:10.847549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 
10:24:10.847787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.697 [2024-04-17 10:24:10.847913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.697 [2024-04-17 10:24:10.847923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.847935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.847947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.847960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.847970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.847982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.847992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-04-17 10:24:10.848739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.698 [2024-04-17 10:24:10.848751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.848876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.848960] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xace0a0 was disconnected and freed. reset controller. 00:27:37.699 [2024-04-17 10:24:10.849172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.699 [2024-04-17 10:24:10.849235] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.699 [2024-04-17 10:24:10.849269] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:37.699 [2024-04-17 10:24:10.849293] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.699 [2024-04-17 10:24:10.849308] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.699 [2024-04-17 10:24:10.849323] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.699 [2024-04-17 10:24:10.850953] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:37.699 [2024-04-17 10:24:10.850986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:37.699 [2024-04-17 10:24:10.851267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.851389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.851404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x910c50 with addr=10.0.0.2, port=4420 00:27:37.699 [2024-04-17 10:24:10.851416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x910c50 is same with the state(5) to be set 00:27:37.699 [2024-04-17 10:24:10.851790] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.699 [2024-04-17 10:24:10.852461] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.699 [2024-04-17 10:24:10.853153] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.699 [2024-04-17 10:24:10.853209] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:37.699 [2024-04-17 10:24:10.853259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:37.699 [2024-04-17 10:24:10.853274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:37.699 [2024-04-17 10:24:10.853528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.853781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.853797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9139b0 with addr=10.0.0.2, port=4420 00:27:37.699 [2024-04-17 10:24:10.853808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9139b0 is same with the state(5) to be set 00:27:37.699 [2024-04-17 10:24:10.854006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.854224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.699 [2024-04-17 10:24:10.854238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x92c970 with addr=10.0.0.2, port=4420 00:27:37.699 [2024-04-17 10:24:10.854248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92c970 is same with the state(5) to be set 00:27:37.699 [2024-04-17 10:24:10.854263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910c50 (9): Bad file descriptor 00:27:37.699 [2024-04-17 10:24:10.854329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 
10:24:10.854344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.699 [2024-04-17 10:24:10.854826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-04-17 10:24:10.854836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.854982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.854991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.700 [2024-04-17 10:24:10.855691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.700 [2024-04-17 10:24:10.855703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.855719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.855731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.855741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.855753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.855765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.855777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.855787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.855797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9010c0 is same with the state(5) to be set 00:27:37.701 [2024-04-17 10:24:10.857253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857398] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.857979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.857991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.858002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.858014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.858026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.701 [2024-04-17 10:24:10.858038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.701 [2024-04-17 10:24:10.858048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18304 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:37.702 [2024-04-17 10:24:10.858548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.702 [2024-04-17 10:24:10.858703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.702 [2024-04-17 10:24:10.858713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.858724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5f560 is same with the state(5) to be set 00:27:37.703 [2024-04-17 10:24:10.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 
10:24:10.860541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860776] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.860978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.860989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.703 [2024-04-17 10:24:10.861339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.703 [2024-04-17 10:24:10.861348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.861948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.861959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1979240 is same with the state(5) to be set 00:27:37.704 [2024-04-17 10:24:10.864492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.704 [2024-04-17 10:24:10.864798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.704 [2024-04-17 10:24:10.864808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.864979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.864988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.705 [2024-04-17 10:24:10.865687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.705 [2024-04-17 10:24:10.865697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.706 [2024-04-17 10:24:10.865968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.706 [2024-04-17 10:24:10.865979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1be40 is same with the state(5) to be set 00:27:37.706 [2024-04-17 10:24:10.867707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:37.706 [2024-04-17 10:24:10.867731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:37.706 [2024-04-17 10:24:10.867744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:37.706 [2024-04-17 10:24:10.867757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:37.706 [2024-04-17 10:24:10.868029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.706 [2024-04-17 10:24:10.868222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.706 [2024-04-17 10:24:10.868237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9324b0 with addr=10.0.0.2, port=4420 00:27:37.706 [2024-04-17 10:24:10.868252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9324b0 is same with the state(5) to be set 00:27:37.706 
[2024-04-17 10:24:10.868376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.706 [2024-04-17 10:24:10.868585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.706 [2024-04-17 10:24:10.868600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d5250 with addr=10.0.0.2, port=4420 00:27:37.706 [2024-04-17 10:24:10.868610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d5250 is same with the state(5) to be set 00:27:37.706 [2024-04-17 10:24:10.868625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9139b0 (9): Bad file descriptor 00:27:37.706 [2024-04-17 10:24:10.868638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92c970 (9): Bad file descriptor 00:27:37.706 [2024-04-17 10:24:10.868659] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.706 [2024-04-17 10:24:10.868668] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.706 [2024-04-17 10:24:10.868679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.706 [2024-04-17 10:24:10.868715] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.706 [2024-04-17 10:24:10.868739] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.706 [2024-04-17 10:24:10.868757] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.706 [2024-04-17 10:24:10.868771] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:37.706 [2024-04-17 10:24:10.868786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d5250 (9): Bad file descriptor
00:27:37.706 [2024-04-17 10:24:10.868803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9324b0 (9): Bad file descriptor
00:27:37.706 task offset: 24320 on job bdev=Nvme1n1 fails
00:27:37.706 
00:27:37.706 Latency(us)
00:27:37.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:37.706 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme1n1 ended in about 0.59 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme1n1 : 0.59 275.81 17.24 107.63 0.00 165187.30 82932.83 139174.63
00:27:37.706 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme2n1 ended in about 0.61 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme2n1 : 0.61 206.01 12.88 104.64 0.00 200681.89 121062.87 177304.67
00:27:37.706 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme3n1 ended in about 0.60 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme3n1 : 0.60 274.94 17.18 107.29 0.00 160235.00 75306.82 209715.20
00:27:37.706 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme4n1 ended in about 0.60 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme4n1 : 0.60 274.10 17.13 106.97 0.00 158049.42 68634.07 135361.63
00:27:37.706 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme5n1 ended in about 0.61 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme5n1 : 0.61 205.03 12.81 104.14 0.00 191871.85 118679.74 165865.66
00:27:37.706 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme6n1 ended in about 0.60 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme6n1 : 0.60 273.23 17.08 106.63 0.00 153057.39 61484.68 135361.63
00:27:37.706 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme7n1 ended in about 0.60 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme7n1 : 0.60 272.48 17.03 106.33 0.00 150872.34 54096.99 135361.63
00:27:37.706 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme8n1 ended in about 0.61 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme8n1 : 0.61 270.92 16.93 105.73 0.00 149078.33 11736.90 174444.92
00:27:37.706 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme9n1 ended in about 0.62 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme9n1 : 0.62 207.20 12.95 103.60 0.00 177981.44 13345.51 152520.15
00:27:37.706 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:37.706 Job: Nvme10n1 ended in about 0.62 seconds with error
00:27:37.706 Verification LBA range: start 0x0 length 0x400
00:27:37.706 Nvme10n1 : 0.62 202.65 12.67 102.93 0.00 177946.86 120586.24 142987.64
===================================================================================================================
00:27:37.706 Total : 2462.38 153.90 1055.90 0.00 167176.03 11736.90 209715.20
00:27:37.706 [2024-04-17 10:24:10.898816] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:37.706 [2024-04-17 10:24:10.898862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:37.706 [2024-04-17 10:24:10.898880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:37.706 [2024-04-17 10:24:10.899175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.706 [2024-04-17 10:24:10.899376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.706 [2024-04-17 10:24:10.899394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x874520 with addr=10.0.0.2, port=4420
00:27:37.706 [2024-04-17 10:24:10.899407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x874520 is same with the state(5) to be set
00:27:37.706 [2024-04-17 10:24:10.899494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.706 [2024-04-17 10:24:10.899629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.706 [2024-04-17 10:24:10.899651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934b20 with addr=10.0.0.2, port=4420
00:27:37.707 [2024-04-17 10:24:10.899663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934b20 is same with the state(5) to be set
00:27:37.707 [2024-04-17 10:24:10.899961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.707 [2024-04-17 10:24:10.900132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.707 [2024-04-17 10:24:10.900148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d46a0 with addr=10.0.0.2, port=4420
00:27:37.707 [2024-04-17 10:24:10.900158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d46a0 is same with the state(5) to be set
00:27:37.707 [2024-04-17 10:24:10.900349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.707 [2024-04-17 10:24:10.900469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.707 [2024-04-17 10:24:10.900484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a4b80 with addr=10.0.0.2, port=4420
00:27:37.707 [2024-04-17 10:24:10.900495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a4b80 is same with the state(5) to be set
00:27:37.707 [2024-04-17 10:24:10.900510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:37.707 [2024-04-17 10:24:10.900519] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:37.707 [2024-04-17 10:24:10.900537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
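The bdevperf summary above is internally consistent: with the 65536-byte IO size shown in each Job line, the MiB/s column matches IOPS * 65536 / 2^20 (i.e., IOPS / 16) for every device, and the Total row is the column-wise sum of the ten devices. A minimal cross-check sketch, with the rows copied from the table (illustrative only, not part of the SPDK test scripts):

# Cross-check of the bdevperf summary above (illustrative only, not SPDK code).
# Row data is copied verbatim from the table printed by the test.
IO_SIZE = 65536  # bytes, from "IO size: 65536" in the Job lines

rows = {
    # name: (runtime_s, iops, mib_s, fail_s)
    "Nvme1n1":  (0.59, 275.81, 17.24, 107.63),
    "Nvme2n1":  (0.61, 206.01, 12.88, 104.64),
    "Nvme3n1":  (0.60, 274.94, 17.18, 107.29),
    "Nvme4n1":  (0.60, 274.10, 17.13, 106.97),
    "Nvme5n1":  (0.61, 205.03, 12.81, 104.14),
    "Nvme6n1":  (0.60, 273.23, 17.08, 106.63),
    "Nvme7n1":  (0.60, 272.48, 17.03, 106.33),
    "Nvme8n1":  (0.61, 270.92, 16.93, 105.73),
    "Nvme9n1":  (0.62, 207.20, 12.95, 103.60),
    "Nvme10n1": (0.62, 202.65, 12.67, 102.93),
}

for name, (_, iops, mib_s, _) in rows.items():
    # The reported MiB/s agrees with IOPS * IO size, converted to MiB, within rounding.
    assert abs(iops * IO_SIZE / 2**20 - mib_s) < 0.01, name

total_iops = sum(r[1] for r in rows.values())
total_fail = sum(r[3] for r in rows.values())
print(f"sum IOPS   = {total_iops:.2f}  (Total row: 2462.38)")
print(f"sum Fail/s = {total_fail:.2f}  (Total row: 1055.90)")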
00:27:37.707 [2024-04-17 10:24:10.900554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.900564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.900573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:37.707 [2024-04-17 10:24:10.901913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.901932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.902263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.902528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.902545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4e20 with addr=10.0.0.2, port=4420 00:27:37.707 [2024-04-17 10:24:10.902557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4e20 is same with the state(5) to be set 00:27:37.707 [2024-04-17 10:24:10.902574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x874520 (9): Bad file descriptor 00:27:37.707 [2024-04-17 10:24:10.902588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934b20 (9): Bad file descriptor 00:27:37.707 [2024-04-17 10:24:10.902601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d46a0 (9): Bad file descriptor 00:27:37.707 [2024-04-17 10:24:10.902614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a4b80 (9): Bad file descriptor 00:27:37.707 [2024-04-17 10:24:10.902626] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.902635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.902651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:37.707 [2024-04-17 10:24:10.902668] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.902676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.902686] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:37.707 [2024-04-17 10:24:10.902746] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.707 [2024-04-17 10:24:10.902762] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.707 [2024-04-17 10:24:10.902776] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.707 [2024-04-17 10:24:10.902790] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:37.707 [2024-04-17 10:24:10.902804] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.707 [2024-04-17 10:24:10.902816] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.707 [2024-04-17 10:24:10.902890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.902903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.902927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4e20 (9): Bad file descriptor 00:27:37.707 [2024-04-17 10:24:10.902939] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.902952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.902962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:37.707 [2024-04-17 10:24:10.902975] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.902984] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.902992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:37.707 [2024-04-17 10:24:10.903005] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.903014] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.903024] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:37.707 [2024-04-17 10:24:10.903037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.903046] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.903055] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:37.707 [2024-04-17 10:24:10.903124] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.707 [2024-04-17 10:24:10.903140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:37.707 [2024-04-17 10:24:10.903152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:37.707 [2024-04-17 10:24:10.903163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.903172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.903181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.903190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.707 [2024-04-17 10:24:10.903217] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:37.707 [2024-04-17 10:24:10.903227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:37.707 [2024-04-17 10:24:10.903236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:37.707 [2024-04-17 10:24:10.903272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.707 [2024-04-17 10:24:10.903448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.903711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.903729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x910c50 with addr=10.0.0.2, port=4420 00:27:37.707 [2024-04-17 10:24:10.903740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x910c50 is same with the state(5) to be set 00:27:37.707 [2024-04-17 10:24:10.903925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.904170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.904187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x92c970 with addr=10.0.0.2, port=4420 00:27:37.707 [2024-04-17 10:24:10.904198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92c970 is same with the state(5) to be set 00:27:37.707 [2024-04-17 10:24:10.904384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.904656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.707 [2024-04-17 10:24:10.904672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9139b0 with addr=10.0.0.2, port=4420 00:27:37.707 [2024-04-17 10:24:10.904682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9139b0 is same with the state(5) to be set 00:27:37.708 [2024-04-17 10:24:10.904719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x910c50 (9): Bad file descriptor 00:27:37.708 [2024-04-17 10:24:10.904733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92c970 (9): Bad file descriptor 00:27:37.708 [2024-04-17 10:24:10.904746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9139b0 (9): Bad file descriptor 00:27:37.708 [2024-04-17 10:24:10.904781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.708 [2024-04-17 10:24:10.904792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.708 [2024-04-17 10:24:10.904802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
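The burst of 'connect() failed, errno = 111' and 'Resetting controller failed' entries above is the expected tail of this shutdown test case: the NVMe-oF target for these cnodes is already gone (the later 'kill -9 3568547 ... No such process' confirms it), so every host-side reconnect attempt from the bdev_nvme layer is refused. On Linux, errno 111 is ECONNREFUSED; an illustrative one-liner to confirm that against the kernel headers (not part of the test scripts):

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # #define ECONNREFUSED    111     /* Connection refused */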
00:27:37.708 [2024-04-17 10:24:10.904814] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:37.708 [2024-04-17 10:24:10.904823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:37.708 [2024-04-17 10:24:10.904833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:37.708 [2024-04-17 10:24:10.904845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:37.708 [2024-04-17 10:24:10.904854] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:37.708 [2024-04-17 10:24:10.904865] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:37.708 [2024-04-17 10:24:10.904898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.708 [2024-04-17 10:24:10.904910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.708 [2024-04-17 10:24:10.904918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.967 10:24:11 -- target/shutdown.sh@135 -- # nvmfpid= 00:27:37.967 10:24:11 -- target/shutdown.sh@138 -- # sleep 1 00:27:38.904 10:24:12 -- target/shutdown.sh@141 -- # kill -9 3568547 00:27:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3568547) - No such process 00:27:38.904 10:24:12 -- target/shutdown.sh@141 -- # true 00:27:38.904 10:24:12 -- target/shutdown.sh@143 -- # stoptarget 00:27:38.904 10:24:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:38.904 10:24:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:38.904 10:24:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.904 10:24:12 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:38.904 10:24:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:38.904 10:24:12 -- nvmf/common.sh@116 -- # sync 00:27:38.904 10:24:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:38.904 10:24:12 -- nvmf/common.sh@119 -- # set +e 00:27:38.904 10:24:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:38.904 10:24:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:38.904 rmmod nvme_tcp 00:27:39.161 rmmod nvme_fabrics 00:27:39.161 rmmod nvme_keyring 00:27:39.161 10:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:39.161 10:24:12 -- nvmf/common.sh@123 -- # set -e 00:27:39.161 10:24:12 -- nvmf/common.sh@124 -- # return 0 00:27:39.161 10:24:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:27:39.161 10:24:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:39.161 10:24:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:39.161 10:24:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:39.161 10:24:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.161 10:24:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:39.161 10:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.161 10:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.161 10:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.071 10:24:14 -- nvmf/common.sh@278 
-- # ip -4 addr flush cvl_0_1 00:27:41.071 00:27:41.071 real 0m7.668s 00:27:41.071 user 0m18.743s 00:27:41.072 sys 0m1.269s 00:27:41.072 10:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.072 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.072 ************************************ 00:27:41.072 END TEST nvmf_shutdown_tc3 00:27:41.072 ************************************ 00:27:41.072 10:24:14 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:27:41.072 00:27:41.072 real 0m31.434s 00:27:41.072 user 1m19.934s 00:27:41.072 sys 0m8.484s 00:27:41.072 10:24:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.072 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.072 ************************************ 00:27:41.072 END TEST nvmf_shutdown 00:27:41.072 ************************************ 00:27:41.339 10:24:14 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:27:41.339 10:24:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:41.339 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 10:24:14 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:27:41.339 10:24:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:41.339 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 10:24:14 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:27:41.339 10:24:14 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:41.339 10:24:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:41.339 10:24:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.339 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 ************************************ 00:27:41.339 START TEST nvmf_multicontroller 00:27:41.339 ************************************ 00:27:41.339 10:24:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:41.339 * Looking for test storage... 
00:27:41.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:41.339 10:24:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.339 10:24:14 -- nvmf/common.sh@7 -- # uname -s 00:27:41.339 10:24:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.339 10:24:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.339 10:24:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.339 10:24:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.340 10:24:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.340 10:24:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.340 10:24:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.340 10:24:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.340 10:24:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.340 10:24:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.340 10:24:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:41.340 10:24:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:41.340 10:24:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.340 10:24:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.340 10:24:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.340 10:24:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.340 10:24:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.340 10:24:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.340 10:24:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.340 10:24:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.340 10:24:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.340 10:24:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.340 10:24:14 -- paths/export.sh@5 -- # export PATH 00:27:41.340 10:24:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.340 10:24:14 -- nvmf/common.sh@46 -- # : 0 00:27:41.340 10:24:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:41.340 10:24:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:41.340 10:24:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:41.340 10:24:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.340 10:24:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.340 10:24:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:41.340 10:24:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:41.340 10:24:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:41.340 10:24:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:41.340 10:24:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:41.340 10:24:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:41.340 10:24:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:41.340 10:24:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:41.340 10:24:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:41.340 10:24:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:41.340 10:24:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:41.340 10:24:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.340 10:24:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:41.340 10:24:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:41.340 10:24:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:41.340 10:24:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.340 10:24:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.340 10:24:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.340 10:24:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:41.340 10:24:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:41.340 10:24:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:41.340 10:24:14 -- common/autotest_common.sh@10 -- # set +x 00:27:46.608 10:24:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:46.608 10:24:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:46.608 10:24:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:46.608 10:24:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:46.608 
10:24:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:46.608 10:24:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:46.608 10:24:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:46.608 10:24:19 -- nvmf/common.sh@294 -- # net_devs=() 00:27:46.608 10:24:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:46.608 10:24:19 -- nvmf/common.sh@295 -- # e810=() 00:27:46.608 10:24:19 -- nvmf/common.sh@295 -- # local -ga e810 00:27:46.608 10:24:19 -- nvmf/common.sh@296 -- # x722=() 00:27:46.608 10:24:19 -- nvmf/common.sh@296 -- # local -ga x722 00:27:46.608 10:24:19 -- nvmf/common.sh@297 -- # mlx=() 00:27:46.608 10:24:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:46.608 10:24:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.608 10:24:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.608 10:24:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.608 10:24:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.608 10:24:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.608 10:24:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.609 10:24:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:46.609 10:24:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:46.609 10:24:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:46.609 10:24:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:46.867 10:24:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:46.867 10:24:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:46.867 10:24:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:46.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:46.867 10:24:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:46.867 10:24:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:46.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:46.867 10:24:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:46.867 10:24:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:46.868 10:24:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:27:46.868 10:24:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.868 10:24:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:46.868 10:24:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.868 10:24:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:46.868 Found net devices under 0000:af:00.0: cvl_0_0 00:27:46.868 10:24:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.868 10:24:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:46.868 10:24:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.868 10:24:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:46.868 10:24:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.868 10:24:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:46.868 Found net devices under 0000:af:00.1: cvl_0_1 00:27:46.868 10:24:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.868 10:24:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:46.868 10:24:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:46.868 10:24:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:46.868 10:24:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:46.868 10:24:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.868 10:24:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.868 10:24:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.868 10:24:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:46.868 10:24:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.868 10:24:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.868 10:24:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:46.868 10:24:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.868 10:24:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.868 10:24:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:46.868 10:24:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:46.868 10:24:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.868 10:24:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.868 10:24:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.868 10:24:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.868 10:24:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:46.868 10:24:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.868 10:24:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.868 10:24:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.126 10:24:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:47.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:47.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:47.126 00:27:47.126 --- 10.0.0.2 ping statistics --- 00:27:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.126 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:47.126 10:24:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:47.126 00:27:47.126 --- 10.0.0.1 ping statistics --- 00:27:47.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.126 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:47.126 10:24:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.127 10:24:20 -- nvmf/common.sh@410 -- # return 0 00:27:47.127 10:24:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:47.127 10:24:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.127 10:24:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:47.127 10:24:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:47.127 10:24:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.127 10:24:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:47.127 10:24:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:47.127 10:24:20 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:47.127 10:24:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:47.127 10:24:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.127 10:24:20 -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 10:24:20 -- nvmf/common.sh@469 -- # nvmfpid=3573268 00:27:47.127 10:24:20 -- nvmf/common.sh@470 -- # waitforlisten 3573268 00:27:47.127 10:24:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:47.127 10:24:20 -- common/autotest_common.sh@819 -- # '[' -z 3573268 ']' 00:27:47.127 10:24:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.127 10:24:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:47.127 10:24:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.127 10:24:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:47.127 10:24:20 -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 [2024-04-17 10:24:20.301738] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:47.127 [2024-04-17 10:24:20.301794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.127 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.127 [2024-04-17 10:24:20.380688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:47.386 [2024-04-17 10:24:20.470079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:47.386 [2024-04-17 10:24:20.470219] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.386 [2024-04-17 10:24:20.470231] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:47.386 [2024-04-17 10:24:20.470243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.386 [2024-04-17 10:24:20.470350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.386 [2024-04-17 10:24:20.470461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.386 [2024-04-17 10:24:20.470461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.953 10:24:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:47.953 10:24:21 -- common/autotest_common.sh@852 -- # return 0 00:27:47.953 10:24:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:47.953 10:24:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:47.953 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:47.953 10:24:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.953 10:24:21 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.953 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:47.953 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:47.953 [2024-04-17 10:24:21.280409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 Malloc0 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 [2024-04-17 10:24:21.347390] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 [2024-04-17 10:24:21.355337] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 Malloc1 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.212 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.212 10:24:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:48.212 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.212 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.213 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.213 10:24:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.213 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.213 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.213 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.213 10:24:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:48.213 10:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.213 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:48.213 10:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.213 10:24:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=3573548 00:27:48.213 10:24:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.213 10:24:21 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:48.213 10:24:21 -- host/multicontroller.sh@47 -- # waitforlisten 3573548 /var/tmp/bdevperf.sock 00:27:48.213 10:24:21 -- common/autotest_common.sh@819 -- # '[' -z 3573548 ']' 00:27:48.213 10:24:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.213 10:24:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.213 10:24:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
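At this point nvmftestinit has already moved the cvl_0_0 port into the cvl_0_0_ns_spdk namespace with address 10.0.0.2 (the ping checks above), the target is running inside that namespace, and the multicontroller test configures it over RPC before bdevperf is launched. The rpc_cmd calls in the trace correspond roughly to the stand-alone sequence below; this is a sketch that assumes rpc_cmd forwards to scripts/rpc.py and reuses the paths and arguments visible in this log, not a drop-in replacement for the harness:

  # start the target inside the namespace prepared by nvmftestinit
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the harness waits for the target RPC socket via waitforlisten before continuing)

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Malloc1 / nqn.2016-06.io.spdk:cnode2 are created the same way with serial SPDK00000000000002

  # bdevperf gets its own RPC socket; -z makes it wait to be driven over RPC later
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &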
00:27:48.213 10:24:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.213 10:24:21 -- common/autotest_common.sh@10 -- # set +x 00:27:49.149 10:24:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.149 10:24:22 -- common/autotest_common.sh@852 -- # return 0 00:27:49.149 10:24:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:49.149 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.149 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.409 NVMe0n1 00:27:49.409 10:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.409 10:24:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.409 10:24:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:49.409 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.409 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.409 10:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.409 1 00:27:49.409 10:24:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.409 10:24:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:49.409 10:24:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.409 10:24:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:49.409 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.409 10:24:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:49.409 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.409 10:24:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:49.409 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.409 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.409 request: 00:27:49.409 { 00:27:49.409 "name": "NVMe0", 00:27:49.409 "trtype": "tcp", 00:27:49.409 "traddr": "10.0.0.2", 00:27:49.409 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:49.409 "hostaddr": "10.0.0.2", 00:27:49.409 "hostsvcid": "60000", 00:27:49.409 "adrfam": "ipv4", 00:27:49.409 "trsvcid": "4420", 00:27:49.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.409 "method": "bdev_nvme_attach_controller", 00:27:49.409 "req_id": 1 00:27:49.409 } 00:27:49.410 Got JSON-RPC error response 00:27:49.410 response: 00:27:49.410 { 00:27:49.410 "code": -114, 00:27:49.410 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.410 } 00:27:49.410 10:24:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # es=1 00:27:49.410 10:24:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:49.410 10:24:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:49.410 10:24:22 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.410 10:24:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:49.410 10:24:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.410 10:24:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:49.410 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.410 request: 00:27:49.410 { 00:27:49.410 "name": "NVMe0", 00:27:49.410 "trtype": "tcp", 00:27:49.410 "traddr": "10.0.0.2", 00:27:49.410 "hostaddr": "10.0.0.2", 00:27:49.410 "hostsvcid": "60000", 00:27:49.410 "adrfam": "ipv4", 00:27:49.410 "trsvcid": "4420", 00:27:49.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:49.410 "method": "bdev_nvme_attach_controller", 00:27:49.410 "req_id": 1 00:27:49.410 } 00:27:49.410 Got JSON-RPC error response 00:27:49.410 response: 00:27:49.410 { 00:27:49.410 "code": -114, 00:27:49.410 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.410 } 00:27:49.410 10:24:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # es=1 00:27:49.410 10:24:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:49.410 10:24:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:49.410 10:24:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:49.410 10:24:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.410 request: 00:27:49.410 { 00:27:49.410 "name": "NVMe0", 00:27:49.410 "trtype": "tcp", 00:27:49.410 "traddr": "10.0.0.2", 00:27:49.410 "hostaddr": 
"10.0.0.2", 00:27:49.410 "hostsvcid": "60000", 00:27:49.410 "adrfam": "ipv4", 00:27:49.410 "trsvcid": "4420", 00:27:49.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.410 "multipath": "disable", 00:27:49.410 "method": "bdev_nvme_attach_controller", 00:27:49.410 "req_id": 1 00:27:49.410 } 00:27:49.410 Got JSON-RPC error response 00:27:49.410 response: 00:27:49.410 { 00:27:49.410 "code": -114, 00:27:49.410 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:49.410 } 00:27:49.410 10:24:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # es=1 00:27:49.410 10:24:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:49.410 10:24:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:49.410 10:24:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:49.410 10:24:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:49.410 10:24:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:49.410 10:24:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:49.410 10:24:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:49.410 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.410 request: 00:27:49.410 { 00:27:49.410 "name": "NVMe0", 00:27:49.410 "trtype": "tcp", 00:27:49.410 "traddr": "10.0.0.2", 00:27:49.410 "hostaddr": "10.0.0.2", 00:27:49.410 "hostsvcid": "60000", 00:27:49.410 "adrfam": "ipv4", 00:27:49.410 "trsvcid": "4420", 00:27:49.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:49.410 "multipath": "failover", 00:27:49.410 "method": "bdev_nvme_attach_controller", 00:27:49.410 "req_id": 1 00:27:49.410 } 00:27:49.410 Got JSON-RPC error response 00:27:49.410 response: 00:27:49.410 { 00:27:49.410 "code": -114, 00:27:49.410 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:49.410 } 00:27:49.410 10:24:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@643 -- # es=1 00:27:49.410 10:24:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:49.410 10:24:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:49.410 10:24:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:49.410 10:24:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.410 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.410 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.670 00:27:49.670 10:24:22 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:49.670 10:24:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:49.670 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.670 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.670 10:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.670 10:24:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:49.670 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.670 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.670 00:27:49.670 10:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.670 10:24:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:49.670 10:24:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:49.670 10:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.670 10:24:22 -- common/autotest_common.sh@10 -- # set +x 00:27:49.670 10:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.670 10:24:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:49.670 10:24:22 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:51.050 0 00:27:51.050 10:24:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:51.050 10:24:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.050 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:27:51.050 10:24:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.050 10:24:24 -- host/multicontroller.sh@100 -- # killprocess 3573548 00:27:51.050 10:24:24 -- common/autotest_common.sh@926 -- # '[' -z 3573548 ']' 00:27:51.050 10:24:24 -- common/autotest_common.sh@930 -- # kill -0 3573548 00:27:51.050 10:24:24 -- common/autotest_common.sh@931 -- # uname 00:27:51.050 10:24:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.050 10:24:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3573548 00:27:51.050 10:24:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:51.050 10:24:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:51.050 10:24:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3573548' 00:27:51.050 killing process with pid 3573548 00:27:51.050 10:24:24 -- common/autotest_common.sh@945 -- # kill 3573548 00:27:51.050 10:24:24 -- common/autotest_common.sh@950 -- # wait 3573548 00:27:51.326 10:24:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.326 10:24:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.326 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:27:51.326 10:24:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.326 10:24:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:51.326 10:24:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.326 10:24:24 -- common/autotest_common.sh@10 -- # set +x 00:27:51.326 10:24:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.326 10:24:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
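The -114 JSON-RPC errors above are the intentional negative cases of the multicontroller test: re-attaching a controller with the same bdev name over the same network path is rejected whether a different hostnqn, a different subsystem, '-x disable', or '-x failover' is supplied. The accepted calls reduce to the short sequence below, issued against bdevperf's own RPC socket; a sketch assuming scripts/rpc.py as the rpc_cmd backend, with all arguments taken from the trace:

  RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000     # exposes NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1                          # second listener path, accepted
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $RPC bdev_nvme_get_controllers | grep -c NVMe               # the test expects 2
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  $RPC bdev_nvme_detach_controller NVMe1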
00:27:51.326 10:24:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:51.326 10:24:24 -- common/autotest_common.sh@1597 -- # read -r file 00:27:51.326 10:24:24 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:51.326 10:24:24 -- common/autotest_common.sh@1596 -- # sort -u 00:27:51.326 10:24:24 -- common/autotest_common.sh@1598 -- # cat 00:27:51.326 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:51.326 [2024-04-17 10:24:21.458299] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:51.326 [2024-04-17 10:24:21.458365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3573548 ] 00:27:51.326 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.326 [2024-04-17 10:24:21.538000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.326 [2024-04-17 10:24:21.620542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.326 [2024-04-17 10:24:22.943877] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name ef267c3d-48af-4ac0-ae1d-16ffbbc1dea2 already exists 00:27:51.326 [2024-04-17 10:24:22.943913] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:ef267c3d-48af-4ac0-ae1d-16ffbbc1dea2 alias for bdev NVMe1n1 00:27:51.326 [2024-04-17 10:24:22.943925] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:51.326 Running I/O for 1 seconds... 00:27:51.326 00:27:51.326 Latency(us) 00:27:51.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.326 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:51.326 NVMe0n1 : 1.01 16434.35 64.20 0.00 0.00 7750.72 7179.17 14834.97 00:27:51.326 =================================================================================================================== 00:27:51.326 Total : 16434.35 64.20 0.00 0.00 7750.72 7179.17 14834.97 00:27:51.326 Received shutdown signal, test time was about 1.000000 seconds 00:27:51.326 00:27:51.326 Latency(us) 00:27:51.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.326 =================================================================================================================== 00:27:51.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.326 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:51.326 10:24:24 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:51.326 10:24:24 -- common/autotest_common.sh@1597 -- # read -r file 00:27:51.326 10:24:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:51.326 10:24:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:51.326 10:24:24 -- nvmf/common.sh@116 -- # sync 00:27:51.326 10:24:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:51.326 10:24:24 -- nvmf/common.sh@119 -- # set +e 00:27:51.326 10:24:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:51.326 10:24:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:51.326 rmmod nvme_tcp 00:27:51.326 rmmod nvme_fabrics 00:27:51.326 rmmod nvme_keyring 00:27:51.326 10:24:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:51.326 10:24:24 -- nvmf/common.sh@123 -- # set 
-e 00:27:51.326 10:24:24 -- nvmf/common.sh@124 -- # return 0 00:27:51.326 10:24:24 -- nvmf/common.sh@477 -- # '[' -n 3573268 ']' 00:27:51.326 10:24:24 -- nvmf/common.sh@478 -- # killprocess 3573268 00:27:51.326 10:24:24 -- common/autotest_common.sh@926 -- # '[' -z 3573268 ']' 00:27:51.326 10:24:24 -- common/autotest_common.sh@930 -- # kill -0 3573268 00:27:51.326 10:24:24 -- common/autotest_common.sh@931 -- # uname 00:27:51.326 10:24:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:51.326 10:24:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3573268 00:27:51.326 10:24:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:51.326 10:24:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:51.326 10:24:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3573268' 00:27:51.326 killing process with pid 3573268 00:27:51.326 10:24:24 -- common/autotest_common.sh@945 -- # kill 3573268 00:27:51.326 10:24:24 -- common/autotest_common.sh@950 -- # wait 3573268 00:27:51.586 10:24:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:51.586 10:24:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:51.586 10:24:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:51.586 10:24:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.586 10:24:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:51.586 10:24:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.586 10:24:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.586 10:24:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.121 10:24:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:54.121 00:27:54.121 real 0m12.406s 00:27:54.121 user 0m17.982s 00:27:54.121 sys 0m5.058s 00:27:54.121 10:24:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.121 10:24:26 -- common/autotest_common.sh@10 -- # set +x 00:27:54.121 ************************************ 00:27:54.121 END TEST nvmf_multicontroller 00:27:54.121 ************************************ 00:27:54.122 10:24:26 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:54.122 10:24:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:54.122 10:24:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.122 10:24:26 -- common/autotest_common.sh@10 -- # set +x 00:27:54.122 ************************************ 00:27:54.122 START TEST nvmf_aer 00:27:54.122 ************************************ 00:27:54.122 10:24:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:54.122 * Looking for test storage... 
00:27:54.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.122 10:24:27 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.122 10:24:27 -- nvmf/common.sh@7 -- # uname -s 00:27:54.122 10:24:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.122 10:24:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.122 10:24:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.122 10:24:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.122 10:24:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.122 10:24:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.122 10:24:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.122 10:24:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.122 10:24:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.122 10:24:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.122 10:24:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:54.122 10:24:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:54.122 10:24:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.122 10:24:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.122 10:24:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.122 10:24:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.122 10:24:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.122 10:24:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.122 10:24:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.122 10:24:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.122 10:24:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.122 10:24:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.122 10:24:27 -- paths/export.sh@5 -- # export PATH 00:27:54.122 10:24:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.122 10:24:27 -- nvmf/common.sh@46 -- # : 0 00:27:54.122 10:24:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:54.122 10:24:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:54.122 10:24:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:54.122 10:24:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.122 10:24:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.122 10:24:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:54.122 10:24:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:54.122 10:24:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:54.122 10:24:27 -- host/aer.sh@11 -- # nvmftestinit 00:27:54.122 10:24:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:54.122 10:24:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.122 10:24:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:54.122 10:24:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:54.122 10:24:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:54.122 10:24:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.122 10:24:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.122 10:24:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.122 10:24:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:54.122 10:24:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:54.122 10:24:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:54.122 10:24:27 -- common/autotest_common.sh@10 -- # set +x 00:27:59.398 10:24:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:59.398 10:24:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:59.398 10:24:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:59.398 10:24:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:59.398 10:24:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:59.398 10:24:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:59.398 10:24:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:59.398 10:24:32 -- nvmf/common.sh@294 -- # net_devs=() 00:27:59.398 10:24:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:59.398 10:24:32 -- nvmf/common.sh@295 -- # e810=() 00:27:59.398 10:24:32 -- nvmf/common.sh@295 -- # local -ga e810 00:27:59.398 10:24:32 -- nvmf/common.sh@296 -- # x722=() 00:27:59.398 
10:24:32 -- nvmf/common.sh@296 -- # local -ga x722 00:27:59.398 10:24:32 -- nvmf/common.sh@297 -- # mlx=() 00:27:59.398 10:24:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:59.398 10:24:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.398 10:24:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:59.398 10:24:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:59.398 10:24:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:59.398 10:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:59.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:59.398 10:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:59.398 10:24:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:59.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:59.398 10:24:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:59.398 10:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.398 10:24:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.398 10:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:59.398 Found net devices under 0000:af:00.0: cvl_0_0 00:27:59.398 10:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.398 10:24:32 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:59.398 10:24:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.398 10:24:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.398 10:24:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:59.398 Found net devices under 0000:af:00.1: cvl_0_1 00:27:59.398 10:24:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.398 10:24:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:59.398 10:24:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:59.398 10:24:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.398 10:24:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.398 10:24:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.398 10:24:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:59.398 10:24:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.398 10:24:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.398 10:24:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:59.398 10:24:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.398 10:24:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.398 10:24:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:59.398 10:24:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:59.398 10:24:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.398 10:24:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.398 10:24:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.398 10:24:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.398 10:24:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:59.398 10:24:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.398 10:24:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.398 10:24:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.398 10:24:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:59.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:27:59.398 00:27:59.398 --- 10.0.0.2 ping statistics --- 00:27:59.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.398 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:59.398 10:24:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:27:59.398 00:27:59.398 --- 10.0.0.1 ping statistics --- 00:27:59.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.398 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:59.398 10:24:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.398 10:24:32 -- nvmf/common.sh@410 -- # return 0 00:27:59.398 10:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:59.398 10:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.398 10:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:59.398 10:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.399 10:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:59.399 10:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:59.399 10:24:32 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:59.399 10:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:59.399 10:24:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:59.399 10:24:32 -- common/autotest_common.sh@10 -- # set +x 00:27:59.399 10:24:32 -- nvmf/common.sh@469 -- # nvmfpid=3577584 00:27:59.399 10:24:32 -- nvmf/common.sh@470 -- # waitforlisten 3577584 00:27:59.399 10:24:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:59.399 10:24:32 -- common/autotest_common.sh@819 -- # '[' -z 3577584 ']' 00:27:59.399 10:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.399 10:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.399 10:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.399 10:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.399 10:24:32 -- common/autotest_common.sh@10 -- # set +x 00:27:59.399 [2024-04-17 10:24:32.405878] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:59.399 [2024-04-17 10:24:32.405931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.399 [2024-04-17 10:24:32.490904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.399 [2024-04-17 10:24:32.580313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:59.399 [2024-04-17 10:24:32.580461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.399 [2024-04-17 10:24:32.580473] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.399 [2024-04-17 10:24:32.580481] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
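
Before each host test, nvmf_tcp_init (logged above) builds the loopback-style topology the TCP tests run over: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened, and both directions are ping-checked. Pulled together from the commands in the log (interface names and addresses are specific to this runner), the sequence is roughly:

    # Two-port E810 topology used by the TCP host tests; commands as recorded above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target side lives in its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
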
00:27:59.399 [2024-04-17 10:24:32.580531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.399 [2024-04-17 10:24:32.580634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.399 [2024-04-17 10:24:32.580742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.399 [2024-04-17 10:24:32.580744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.367 10:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:00.367 10:24:33 -- common/autotest_common.sh@852 -- # return 0 00:28:00.367 10:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:00.367 10:24:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 10:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.367 10:24:33 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 [2024-04-17 10:24:33.389473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 Malloc0 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 [2024-04-17 10:24:33.445206] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:00.367 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.367 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.367 [2024-04-17 10:24:33.452960] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:00.367 [ 00:28:00.367 { 00:28:00.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:00.367 "subtype": "Discovery", 00:28:00.367 "listen_addresses": [], 00:28:00.367 "allow_any_host": true, 00:28:00.367 "hosts": [] 00:28:00.367 }, 00:28:00.367 { 00:28:00.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:28:00.367 "subtype": "NVMe", 00:28:00.367 "listen_addresses": [ 00:28:00.367 { 00:28:00.367 "transport": "TCP", 00:28:00.367 "trtype": "TCP", 00:28:00.367 "adrfam": "IPv4", 00:28:00.367 "traddr": "10.0.0.2", 00:28:00.367 "trsvcid": "4420" 00:28:00.367 } 00:28:00.367 ], 00:28:00.367 "allow_any_host": true, 00:28:00.367 "hosts": [], 00:28:00.367 "serial_number": "SPDK00000000000001", 00:28:00.367 "model_number": "SPDK bdev Controller", 00:28:00.367 "max_namespaces": 2, 00:28:00.367 "min_cntlid": 1, 00:28:00.367 "max_cntlid": 65519, 00:28:00.367 "namespaces": [ 00:28:00.367 { 00:28:00.367 "nsid": 1, 00:28:00.367 "bdev_name": "Malloc0", 00:28:00.367 "name": "Malloc0", 00:28:00.367 "nguid": "16164E655CC342909445362E1FCA8CA5", 00:28:00.367 "uuid": "16164e65-5cc3-4290-9445-362e1fca8ca5" 00:28:00.367 } 00:28:00.367 ] 00:28:00.367 } 00:28:00.367 ] 00:28:00.367 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.367 10:24:33 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:00.367 10:24:33 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:00.367 10:24:33 -- host/aer.sh@33 -- # aerpid=3577869 00:28:00.367 10:24:33 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:00.367 10:24:33 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:00.367 10:24:33 -- common/autotest_common.sh@1244 -- # local i=0 00:28:00.367 10:24:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1247 -- # i=1 00:28:00.367 10:24:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:00.367 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.367 10:24:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1247 -- # i=2 00:28:00.367 10:24:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:00.367 10:24:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:28:00.367 10:24:33 -- common/autotest_common.sh@1247 -- # i=3 00:28:00.367 10:24:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:00.626 10:24:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.626 10:24:33 -- common/autotest_common.sh@1246 -- # '[' 3 -lt 200 ']' 00:28:00.626 10:24:33 -- common/autotest_common.sh@1247 -- # i=4 00:28:00.626 10:24:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:28:00.626 10:24:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:00.626 10:24:33 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:00.626 10:24:33 -- common/autotest_common.sh@1255 -- # return 0 00:28:00.626 10:24:33 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:00.626 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.626 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.626 Malloc1 00:28:00.626 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.627 10:24:33 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:00.627 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.627 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.627 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.627 10:24:33 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:00.627 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.627 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.627 Asynchronous Event Request test 00:28:00.627 Attaching to 10.0.0.2 00:28:00.627 Attached to 10.0.0.2 00:28:00.627 Registering asynchronous event callbacks... 00:28:00.627 Starting namespace attribute notice tests for all controllers... 00:28:00.627 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:00.627 aer_cb - Changed Namespace 00:28:00.627 Cleaning up... 00:28:00.627 [ 00:28:00.627 { 00:28:00.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:00.627 "subtype": "Discovery", 00:28:00.627 "listen_addresses": [], 00:28:00.627 "allow_any_host": true, 00:28:00.627 "hosts": [] 00:28:00.627 }, 00:28:00.627 { 00:28:00.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.627 "subtype": "NVMe", 00:28:00.627 "listen_addresses": [ 00:28:00.627 { 00:28:00.627 "transport": "TCP", 00:28:00.627 "trtype": "TCP", 00:28:00.627 "adrfam": "IPv4", 00:28:00.627 "traddr": "10.0.0.2", 00:28:00.627 "trsvcid": "4420" 00:28:00.627 } 00:28:00.627 ], 00:28:00.627 "allow_any_host": true, 00:28:00.627 "hosts": [], 00:28:00.627 "serial_number": "SPDK00000000000001", 00:28:00.627 "model_number": "SPDK bdev Controller", 00:28:00.627 "max_namespaces": 2, 00:28:00.627 "min_cntlid": 1, 00:28:00.627 "max_cntlid": 65519, 00:28:00.627 "namespaces": [ 00:28:00.627 { 00:28:00.627 "nsid": 1, 00:28:00.627 "bdev_name": "Malloc0", 00:28:00.627 "name": "Malloc0", 00:28:00.627 "nguid": "16164E655CC342909445362E1FCA8CA5", 00:28:00.627 "uuid": "16164e65-5cc3-4290-9445-362e1fca8ca5" 00:28:00.627 }, 00:28:00.627 { 00:28:00.627 "nsid": 2, 00:28:00.627 "bdev_name": "Malloc1", 00:28:00.627 "name": "Malloc1", 00:28:00.627 "nguid": "05A6AA30998B4379AA6B4117B68F2EE7", 00:28:00.627 "uuid": "05a6aa30-998b-4379-aa6b-4117b68f2ee7" 00:28:00.627 } 00:28:00.627 ] 00:28:00.627 } 00:28:00.627 ] 00:28:00.627 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.627 10:24:33 -- host/aer.sh@43 -- # wait 3577869 00:28:00.627 10:24:33 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:00.627 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.627 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.886 10:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.886 10:24:33 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:00.886 10:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.886 10:24:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.886 10:24:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.886 10:24:34 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.886 10:24:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.886 10:24:34 -- common/autotest_common.sh@10 -- # set +x 00:28:00.886 10:24:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.886 10:24:34 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:00.886 10:24:34 -- host/aer.sh@51 -- # nvmftestfini 00:28:00.886 10:24:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.886 10:24:34 -- nvmf/common.sh@116 -- # sync 00:28:00.886 10:24:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.886 10:24:34 -- nvmf/common.sh@119 -- # set +e 00:28:00.886 10:24:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.886 10:24:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.886 rmmod nvme_tcp 00:28:00.886 rmmod nvme_fabrics 00:28:00.886 rmmod nvme_keyring 00:28:00.886 10:24:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.886 10:24:34 -- nvmf/common.sh@123 -- # set -e 00:28:00.886 10:24:34 -- nvmf/common.sh@124 -- # return 0 00:28:00.886 10:24:34 -- nvmf/common.sh@477 -- # '[' -n 3577584 ']' 00:28:00.886 10:24:34 -- nvmf/common.sh@478 -- # killprocess 3577584 00:28:00.886 10:24:34 -- common/autotest_common.sh@926 -- # '[' -z 3577584 ']' 00:28:00.886 10:24:34 -- common/autotest_common.sh@930 -- # kill -0 3577584 00:28:00.886 10:24:34 -- common/autotest_common.sh@931 -- # uname 00:28:00.886 10:24:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:00.886 10:24:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3577584 00:28:00.886 10:24:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:00.886 10:24:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:00.886 10:24:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3577584' 00:28:00.886 killing process with pid 3577584 00:28:00.886 10:24:34 -- common/autotest_common.sh@945 -- # kill 3577584 00:28:00.886 [2024-04-17 10:24:34.120348] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:00.886 10:24:34 -- common/autotest_common.sh@950 -- # wait 3577584 00:28:01.151 10:24:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:01.151 10:24:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:01.151 10:24:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:01.151 10:24:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.151 10:24:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:01.151 10:24:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.151 10:24:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.151 10:24:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.685 10:24:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:03.685 00:28:03.685 real 0m9.475s 00:28:03.685 user 0m8.410s 00:28:03.685 sys 0m4.456s 00:28:03.685 10:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.685 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:28:03.685 ************************************ 00:28:03.685 END TEST nvmf_aer 00:28:03.685 ************************************ 00:28:03.685 10:24:36 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:03.685 10:24:36 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:28:03.685 10:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.685 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:28:03.685 ************************************ 00:28:03.685 START TEST nvmf_async_init 00:28:03.685 ************************************ 00:28:03.685 10:24:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:03.685 * Looking for test storage... 00:28:03.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.685 10:24:36 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.685 10:24:36 -- nvmf/common.sh@7 -- # uname -s 00:28:03.685 10:24:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.685 10:24:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.685 10:24:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.685 10:24:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.685 10:24:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.685 10:24:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.685 10:24:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.685 10:24:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.685 10:24:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.685 10:24:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.685 10:24:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:03.685 10:24:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:03.685 10:24:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.685 10:24:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.685 10:24:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.685 10:24:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.685 10:24:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.685 10:24:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.685 10:24:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.685 10:24:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.685 10:24:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.685 10:24:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.685 10:24:36 -- paths/export.sh@5 -- # export PATH 00:28:03.685 10:24:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.685 10:24:36 -- nvmf/common.sh@46 -- # : 0 00:28:03.685 10:24:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:03.685 10:24:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:03.685 10:24:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:03.685 10:24:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.685 10:24:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.685 10:24:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:03.685 10:24:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:03.685 10:24:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:03.685 10:24:36 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:03.685 10:24:36 -- host/async_init.sh@14 -- # null_block_size=512 00:28:03.685 10:24:36 -- host/async_init.sh@15 -- # null_bdev=null0 00:28:03.685 10:24:36 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:03.685 10:24:36 -- host/async_init.sh@20 -- # tr -d - 00:28:03.685 10:24:36 -- host/async_init.sh@20 -- # uuidgen 00:28:03.685 10:24:36 -- host/async_init.sh@20 -- # nguid=01a33f26bb5947aeb86dc722436b90d2 00:28:03.685 10:24:36 -- host/async_init.sh@22 -- # nvmftestinit 00:28:03.685 10:24:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:03.685 10:24:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.685 10:24:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:03.685 10:24:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:03.685 10:24:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:03.685 10:24:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.685 10:24:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.685 10:24:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.685 10:24:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:03.685 10:24:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:03.685 10:24:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:03.685 10:24:36 -- common/autotest_common.sh@10 -- # set +x 00:28:08.959 10:24:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:08.959 10:24:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:08.959 10:24:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:08.959 10:24:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:08.959 10:24:41 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:08.959 10:24:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:08.959 10:24:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:08.959 10:24:41 -- nvmf/common.sh@294 -- # net_devs=() 00:28:08.959 10:24:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:08.959 10:24:41 -- nvmf/common.sh@295 -- # e810=() 00:28:08.959 10:24:41 -- nvmf/common.sh@295 -- # local -ga e810 00:28:08.959 10:24:41 -- nvmf/common.sh@296 -- # x722=() 00:28:08.959 10:24:41 -- nvmf/common.sh@296 -- # local -ga x722 00:28:08.959 10:24:41 -- nvmf/common.sh@297 -- # mlx=() 00:28:08.959 10:24:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:08.959 10:24:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.959 10:24:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:08.959 10:24:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:08.959 10:24:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:08.959 10:24:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.959 10:24:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:08.959 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:08.959 10:24:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.959 10:24:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:08.959 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:08.959 10:24:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:08.959 10:24:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:08.959 10:24:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.959 
10:24:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.959 10:24:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.959 10:24:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.959 10:24:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:08.959 Found net devices under 0000:af:00.0: cvl_0_0 00:28:08.959 10:24:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.959 10:24:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.959 10:24:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.959 10:24:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.960 10:24:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.960 10:24:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:08.960 Found net devices under 0000:af:00.1: cvl_0_1 00:28:08.960 10:24:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.960 10:24:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:08.960 10:24:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:08.960 10:24:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:08.960 10:24:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:08.960 10:24:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:08.960 10:24:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.960 10:24:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.960 10:24:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.960 10:24:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:08.960 10:24:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.960 10:24:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.960 10:24:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:08.960 10:24:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.960 10:24:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.960 10:24:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:08.960 10:24:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:08.960 10:24:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.960 10:24:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.960 10:24:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.960 10:24:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.960 10:24:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:08.960 10:24:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.960 10:24:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.960 10:24:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.960 10:24:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:08.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:28:08.960 00:28:08.960 --- 10.0.0.2 ping statistics --- 00:28:08.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.960 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:28:08.960 10:24:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:28:08.960 00:28:08.960 --- 10.0.0.1 ping statistics --- 00:28:08.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.960 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:28:08.960 10:24:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.960 10:24:42 -- nvmf/common.sh@410 -- # return 0 00:28:08.960 10:24:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:08.960 10:24:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.960 10:24:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:08.960 10:24:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:08.960 10:24:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.960 10:24:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:08.960 10:24:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:09.219 10:24:42 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:09.219 10:24:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:09.219 10:24:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:09.219 10:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.219 10:24:42 -- nvmf/common.sh@469 -- # nvmfpid=3581474 00:28:09.219 10:24:42 -- nvmf/common.sh@470 -- # waitforlisten 3581474 00:28:09.219 10:24:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:09.219 10:24:42 -- common/autotest_common.sh@819 -- # '[' -z 3581474 ']' 00:28:09.219 10:24:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.219 10:24:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:09.219 10:24:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.219 10:24:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:09.219 10:24:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.219 [2024-04-17 10:24:42.354980] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:09.219 [2024-04-17 10:24:42.355035] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.219 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.219 [2024-04-17 10:24:42.439532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.219 [2024-04-17 10:24:42.528322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:09.219 [2024-04-17 10:24:42.528467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.219 [2024-04-17 10:24:42.528480] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.219 [2024-04-17 10:24:42.528491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
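
With the namespaces re-plumbed, nvmfappstart launches nvmf_tgt inside the target namespace (here single core, -m 0x1, with all trace groups, -e 0xFFFF) and waitforlisten blocks until the app's RPC socket at /var/tmp/spdk.sock comes up, which is what the "Waiting for process to start up and listen on UNIX domain socket" line above reflects. A minimal stand-in for that pattern, assuming the default socket path (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh and do more bookkeeping):

    # Start the target in the namespace and poll for its RPC socket (illustrative loop, not the harness code).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break          # the socket appears once the app is listening
        sleep 0.1
    done
    [ -S /var/tmp/spdk.sock ] || { echo "nvmf_tgt ($nvmfpid) never came up" >&2; exit 1; }
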
00:28:09.219 [2024-04-17 10:24:42.528516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.156 10:24:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.156 10:24:43 -- common/autotest_common.sh@852 -- # return 0 00:28:10.156 10:24:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:10.156 10:24:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 10:24:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.156 10:24:43 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 [2024-04-17 10:24:43.329857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 null0 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 01a33f26bb5947aeb86dc722436b90d2 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.156 [2024-04-17 10:24:43.370101] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.156 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.156 10:24:43 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:10.156 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.156 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 nvme0n1 00:28:10.415 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.415 10:24:43 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.415 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.415 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 [ 00:28:10.415 { 00:28:10.415 "name": "nvme0n1", 00:28:10.415 "aliases": [ 00:28:10.415 
"01a33f26-bb59-47ae-b86d-c722436b90d2" 00:28:10.415 ], 00:28:10.415 "product_name": "NVMe disk", 00:28:10.415 "block_size": 512, 00:28:10.415 "num_blocks": 2097152, 00:28:10.415 "uuid": "01a33f26-bb59-47ae-b86d-c722436b90d2", 00:28:10.415 "assigned_rate_limits": { 00:28:10.415 "rw_ios_per_sec": 0, 00:28:10.415 "rw_mbytes_per_sec": 0, 00:28:10.415 "r_mbytes_per_sec": 0, 00:28:10.415 "w_mbytes_per_sec": 0 00:28:10.415 }, 00:28:10.415 "claimed": false, 00:28:10.415 "zoned": false, 00:28:10.415 "supported_io_types": { 00:28:10.415 "read": true, 00:28:10.415 "write": true, 00:28:10.415 "unmap": false, 00:28:10.415 "write_zeroes": true, 00:28:10.415 "flush": true, 00:28:10.415 "reset": true, 00:28:10.415 "compare": true, 00:28:10.415 "compare_and_write": true, 00:28:10.415 "abort": true, 00:28:10.415 "nvme_admin": true, 00:28:10.415 "nvme_io": true 00:28:10.415 }, 00:28:10.415 "driver_specific": { 00:28:10.415 "nvme": [ 00:28:10.415 { 00:28:10.415 "trid": { 00:28:10.415 "trtype": "TCP", 00:28:10.415 "adrfam": "IPv4", 00:28:10.415 "traddr": "10.0.0.2", 00:28:10.415 "trsvcid": "4420", 00:28:10.415 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.415 }, 00:28:10.415 "ctrlr_data": { 00:28:10.415 "cntlid": 1, 00:28:10.415 "vendor_id": "0x8086", 00:28:10.415 "model_number": "SPDK bdev Controller", 00:28:10.415 "serial_number": "00000000000000000000", 00:28:10.415 "firmware_revision": "24.01.1", 00:28:10.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.415 "oacs": { 00:28:10.415 "security": 0, 00:28:10.415 "format": 0, 00:28:10.415 "firmware": 0, 00:28:10.415 "ns_manage": 0 00:28:10.415 }, 00:28:10.415 "multi_ctrlr": true, 00:28:10.415 "ana_reporting": false 00:28:10.415 }, 00:28:10.415 "vs": { 00:28:10.415 "nvme_version": "1.3" 00:28:10.415 }, 00:28:10.415 "ns_data": { 00:28:10.415 "id": 1, 00:28:10.415 "can_share": true 00:28:10.415 } 00:28:10.415 } 00:28:10.415 ], 00:28:10.415 "mp_policy": "active_passive" 00:28:10.415 } 00:28:10.415 } 00:28:10.415 ] 00:28:10.415 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.415 10:24:43 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:10.415 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.415 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.416 [2024-04-17 10:24:43.614597] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.416 [2024-04-17 10:24:43.614677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea21a0 (9): Bad file descriptor 00:28:10.416 [2024-04-17 10:24:43.746759] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:10.675 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.675 10:24:43 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.675 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.675 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.675 [ 00:28:10.675 { 00:28:10.675 "name": "nvme0n1", 00:28:10.675 "aliases": [ 00:28:10.675 "01a33f26-bb59-47ae-b86d-c722436b90d2" 00:28:10.675 ], 00:28:10.675 "product_name": "NVMe disk", 00:28:10.675 "block_size": 512, 00:28:10.675 "num_blocks": 2097152, 00:28:10.675 "uuid": "01a33f26-bb59-47ae-b86d-c722436b90d2", 00:28:10.675 "assigned_rate_limits": { 00:28:10.675 "rw_ios_per_sec": 0, 00:28:10.675 "rw_mbytes_per_sec": 0, 00:28:10.675 "r_mbytes_per_sec": 0, 00:28:10.675 "w_mbytes_per_sec": 0 00:28:10.675 }, 00:28:10.675 "claimed": false, 00:28:10.675 "zoned": false, 00:28:10.675 "supported_io_types": { 00:28:10.675 "read": true, 00:28:10.675 "write": true, 00:28:10.675 "unmap": false, 00:28:10.675 "write_zeroes": true, 00:28:10.675 "flush": true, 00:28:10.675 "reset": true, 00:28:10.675 "compare": true, 00:28:10.675 "compare_and_write": true, 00:28:10.675 "abort": true, 00:28:10.675 "nvme_admin": true, 00:28:10.675 "nvme_io": true 00:28:10.675 }, 00:28:10.675 "driver_specific": { 00:28:10.675 "nvme": [ 00:28:10.675 { 00:28:10.675 "trid": { 00:28:10.675 "trtype": "TCP", 00:28:10.675 "adrfam": "IPv4", 00:28:10.675 "traddr": "10.0.0.2", 00:28:10.675 "trsvcid": "4420", 00:28:10.675 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.675 }, 00:28:10.675 "ctrlr_data": { 00:28:10.675 "cntlid": 2, 00:28:10.675 "vendor_id": "0x8086", 00:28:10.675 "model_number": "SPDK bdev Controller", 00:28:10.675 "serial_number": "00000000000000000000", 00:28:10.675 "firmware_revision": "24.01.1", 00:28:10.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.676 "oacs": { 00:28:10.676 "security": 0, 00:28:10.676 "format": 0, 00:28:10.676 "firmware": 0, 00:28:10.676 "ns_manage": 0 00:28:10.676 }, 00:28:10.676 "multi_ctrlr": true, 00:28:10.676 "ana_reporting": false 00:28:10.676 }, 00:28:10.676 "vs": { 00:28:10.676 "nvme_version": "1.3" 00:28:10.676 }, 00:28:10.676 "ns_data": { 00:28:10.676 "id": 1, 00:28:10.676 "can_share": true 00:28:10.676 } 00:28:10.676 } 00:28:10.676 ], 00:28:10.676 "mp_policy": "active_passive" 00:28:10.676 } 00:28:10.676 } 00:28:10.676 ] 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@53 -- # mktemp 00:28:10.676 10:24:43 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FWTDtrBhI3 00:28:10.676 10:24:43 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:10.676 10:24:43 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FWTDtrBhI3 00:28:10.676 10:24:43 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 [2024-04-17 10:24:43.803231] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:10.676 [2024-04-17 10:24:43.803380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FWTDtrBhI3 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FWTDtrBhI3 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 [2024-04-17 10:24:43.819273] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:10.676 nvme0n1 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 [ 00:28:10.676 { 00:28:10.676 "name": "nvme0n1", 00:28:10.676 "aliases": [ 00:28:10.676 "01a33f26-bb59-47ae-b86d-c722436b90d2" 00:28:10.676 ], 00:28:10.676 "product_name": "NVMe disk", 00:28:10.676 "block_size": 512, 00:28:10.676 "num_blocks": 2097152, 00:28:10.676 "uuid": "01a33f26-bb59-47ae-b86d-c722436b90d2", 00:28:10.676 "assigned_rate_limits": { 00:28:10.676 "rw_ios_per_sec": 0, 00:28:10.676 "rw_mbytes_per_sec": 0, 00:28:10.676 "r_mbytes_per_sec": 0, 00:28:10.676 "w_mbytes_per_sec": 0 00:28:10.676 }, 00:28:10.676 "claimed": false, 00:28:10.676 "zoned": false, 00:28:10.676 "supported_io_types": { 00:28:10.676 "read": true, 00:28:10.676 "write": true, 00:28:10.676 "unmap": false, 00:28:10.676 "write_zeroes": true, 00:28:10.676 "flush": true, 00:28:10.676 "reset": true, 00:28:10.676 "compare": true, 00:28:10.676 "compare_and_write": true, 00:28:10.676 "abort": true, 00:28:10.676 "nvme_admin": true, 00:28:10.676 "nvme_io": true 00:28:10.676 }, 00:28:10.676 "driver_specific": { 00:28:10.676 "nvme": [ 00:28:10.676 { 00:28:10.676 "trid": { 00:28:10.676 "trtype": "TCP", 00:28:10.676 "adrfam": "IPv4", 00:28:10.676 "traddr": "10.0.0.2", 00:28:10.676 "trsvcid": "4421", 00:28:10.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.676 }, 00:28:10.676 "ctrlr_data": { 00:28:10.676 "cntlid": 3, 00:28:10.676 "vendor_id": "0x8086", 00:28:10.676 "model_number": "SPDK bdev Controller", 00:28:10.676 "serial_number": "00000000000000000000", 00:28:10.676 "firmware_revision": "24.01.1", 00:28:10.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.676 "oacs": { 00:28:10.676 "security": 0, 00:28:10.676 "format": 0, 00:28:10.676 "firmware": 0, 00:28:10.676 "ns_manage": 0 00:28:10.676 }, 00:28:10.676 "multi_ctrlr": true, 00:28:10.676 "ana_reporting": false 00:28:10.676 }, 00:28:10.676 "vs": 
{ 00:28:10.676 "nvme_version": "1.3" 00:28:10.676 }, 00:28:10.676 "ns_data": { 00:28:10.676 "id": 1, 00:28:10.676 "can_share": true 00:28:10.676 } 00:28:10.676 } 00:28:10.676 ], 00:28:10.676 "mp_policy": "active_passive" 00:28:10.676 } 00:28:10.676 } 00:28:10.676 ] 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.676 10:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.676 10:24:43 -- common/autotest_common.sh@10 -- # set +x 00:28:10.676 10:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.676 10:24:43 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FWTDtrBhI3 00:28:10.676 10:24:43 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:10.676 10:24:43 -- host/async_init.sh@78 -- # nvmftestfini 00:28:10.676 10:24:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:10.676 10:24:43 -- nvmf/common.sh@116 -- # sync 00:28:10.676 10:24:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:10.676 10:24:43 -- nvmf/common.sh@119 -- # set +e 00:28:10.676 10:24:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:10.676 10:24:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:10.676 rmmod nvme_tcp 00:28:10.676 rmmod nvme_fabrics 00:28:10.676 rmmod nvme_keyring 00:28:10.676 10:24:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:10.676 10:24:43 -- nvmf/common.sh@123 -- # set -e 00:28:10.676 10:24:43 -- nvmf/common.sh@124 -- # return 0 00:28:10.676 10:24:43 -- nvmf/common.sh@477 -- # '[' -n 3581474 ']' 00:28:10.676 10:24:43 -- nvmf/common.sh@478 -- # killprocess 3581474 00:28:10.676 10:24:43 -- common/autotest_common.sh@926 -- # '[' -z 3581474 ']' 00:28:10.676 10:24:43 -- common/autotest_common.sh@930 -- # kill -0 3581474 00:28:10.676 10:24:43 -- common/autotest_common.sh@931 -- # uname 00:28:10.676 10:24:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:10.676 10:24:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3581474 00:28:10.935 10:24:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:10.935 10:24:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:10.935 10:24:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3581474' 00:28:10.935 killing process with pid 3581474 00:28:10.935 10:24:44 -- common/autotest_common.sh@945 -- # kill 3581474 00:28:10.935 10:24:44 -- common/autotest_common.sh@950 -- # wait 3581474 00:28:10.935 10:24:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:10.935 10:24:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:10.935 10:24:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:10.935 10:24:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.935 10:24:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:10.935 10:24:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.935 10:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.935 10:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.468 10:24:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:13.468 00:28:13.468 real 0m9.849s 00:28:13.468 user 0m3.873s 00:28:13.468 sys 0m4.600s 00:28:13.468 10:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.468 10:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.468 ************************************ 00:28:13.468 END TEST nvmf_async_init 00:28:13.468 
************************************ 00:28:13.468 10:24:46 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.468 10:24:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:13.468 10:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.468 10:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.468 ************************************ 00:28:13.468 START TEST dma 00:28:13.468 ************************************ 00:28:13.468 10:24:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.468 * Looking for test storage... 00:28:13.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.468 10:24:46 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.468 10:24:46 -- nvmf/common.sh@7 -- # uname -s 00:28:13.468 10:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.468 10:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.468 10:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.468 10:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.468 10:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.468 10:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.468 10:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.468 10:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.468 10:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.468 10:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.468 10:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:13.468 10:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:13.468 10:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.468 10:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.468 10:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.468 10:24:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.468 10:24:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.468 10:24:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.468 10:24:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.468 10:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@5 -- # export PATH 00:28:13.468 10:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- nvmf/common.sh@46 -- # : 0 00:28:13.468 10:24:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:13.468 10:24:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:13.468 10:24:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.468 10:24:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.468 10:24:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:13.468 10:24:46 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:13.468 10:24:46 -- host/dma.sh@13 -- # exit 0 00:28:13.468 00:28:13.468 real 0m0.111s 00:28:13.468 user 0m0.065s 00:28:13.468 sys 0m0.054s 00:28:13.468 10:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.468 10:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.468 ************************************ 00:28:13.468 END TEST dma 00:28:13.468 ************************************ 00:28:13.468 10:24:46 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.468 10:24:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:13.468 10:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:13.468 10:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:13.468 ************************************ 00:28:13.468 START TEST nvmf_identify 00:28:13.468 ************************************ 00:28:13.468 10:24:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.468 * Looking for 
test storage... 00:28:13.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.468 10:24:46 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.468 10:24:46 -- nvmf/common.sh@7 -- # uname -s 00:28:13.468 10:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.468 10:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.468 10:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.468 10:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.468 10:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.468 10:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.468 10:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.468 10:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.468 10:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.468 10:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.468 10:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:13.468 10:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:13.468 10:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.468 10:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.468 10:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.468 10:24:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.468 10:24:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.468 10:24:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.468 10:24:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.468 10:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- paths/export.sh@5 -- # export PATH 00:28:13.468 10:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.468 10:24:46 -- nvmf/common.sh@46 -- # : 0 00:28:13.468 10:24:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:13.468 10:24:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:13.468 10:24:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.468 10:24:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.468 10:24:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:13.468 10:24:46 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:13.468 10:24:46 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:13.468 10:24:46 -- host/identify.sh@14 -- # nvmftestinit 00:28:13.468 10:24:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:13.468 10:24:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.468 10:24:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:13.468 10:24:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:13.468 10:24:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:13.468 10:24:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.468 10:24:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.468 10:24:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.468 10:24:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:13.468 10:24:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:13.468 10:24:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:13.468 10:24:46 -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 10:24:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:20.035 10:24:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:20.035 10:24:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:20.035 10:24:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:20.035 10:24:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:20.035 10:24:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:20.035 10:24:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:20.035 10:24:52 -- nvmf/common.sh@294 -- # net_devs=() 00:28:20.035 10:24:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:20.036 10:24:52 -- nvmf/common.sh@295 
-- # e810=() 00:28:20.036 10:24:52 -- nvmf/common.sh@295 -- # local -ga e810 00:28:20.036 10:24:52 -- nvmf/common.sh@296 -- # x722=() 00:28:20.036 10:24:52 -- nvmf/common.sh@296 -- # local -ga x722 00:28:20.036 10:24:52 -- nvmf/common.sh@297 -- # mlx=() 00:28:20.036 10:24:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:20.036 10:24:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.036 10:24:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:20.036 10:24:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:20.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:20.036 10:24:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:20.036 10:24:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:20.036 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:20.036 10:24:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:20.036 10:24:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.036 10:24:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.036 10:24:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:20.036 Found 
net devices under 0000:af:00.0: cvl_0_0 00:28:20.036 10:24:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:20.036 10:24:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.036 10:24:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.036 10:24:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:20.036 Found net devices under 0000:af:00.1: cvl_0_1 00:28:20.036 10:24:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:20.036 10:24:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:20.036 10:24:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.036 10:24:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.036 10:24:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:20.036 10:24:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.036 10:24:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.036 10:24:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:20.036 10:24:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.036 10:24:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.036 10:24:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:20.036 10:24:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:20.036 10:24:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.036 10:24:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.036 10:24:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.036 10:24:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.036 10:24:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:20.036 10:24:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.036 10:24:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.036 10:24:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.036 10:24:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:20.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:28:20.036 00:28:20.036 --- 10.0.0.2 ping statistics --- 00:28:20.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.036 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:20.036 10:24:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:28:20.036 00:28:20.036 --- 10.0.0.1 ping statistics --- 00:28:20.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.036 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:28:20.036 10:24:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.036 10:24:52 -- nvmf/common.sh@410 -- # return 0 00:28:20.036 10:24:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:20.036 10:24:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.036 10:24:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:20.036 10:24:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.036 10:24:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:20.036 10:24:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:20.036 10:24:52 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:20.036 10:24:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:20.036 10:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:20.036 10:24:52 -- host/identify.sh@19 -- # nvmfpid=3585432 00:28:20.036 10:24:52 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:20.036 10:24:52 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:20.036 10:24:52 -- host/identify.sh@23 -- # waitforlisten 3585432 00:28:20.036 10:24:52 -- common/autotest_common.sh@819 -- # '[' -z 3585432 ']' 00:28:20.036 10:24:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.036 10:24:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:20.036 10:24:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.036 10:24:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:20.036 10:24:52 -- common/autotest_common.sh@10 -- # set +x 00:28:20.036 [2024-04-17 10:24:52.439212] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:20.036 [2024-04-17 10:24:52.439251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.036 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.036 [2024-04-17 10:24:52.514697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.036 [2024-04-17 10:24:52.604417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:20.036 [2024-04-17 10:24:52.604562] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.036 [2024-04-17 10:24:52.604574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.036 [2024-04-17 10:24:52.604583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
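The namespace layout built above by nvmf_tcp_init (test/nvmf/common.sh) puts the target-side E810 port cvl_0_0 into its own namespace with 10.0.0.2 and leaves cvl_0_1 on the initiator side with 10.0.0.1. It can be reproduced by hand; a minimal sketch, assuming root and the interface names from this run:

# Reproduce the namespace/address layout the harness built above (commands taken from the log).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                                   # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability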
00:28:20.036 [2024-04-17 10:24:52.604631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.037 [2024-04-17 10:24:52.604694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.037 [2024-04-17 10:24:52.604696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.037 [2024-04-17 10:24:52.604657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.297 10:24:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:20.297 10:24:53 -- common/autotest_common.sh@852 -- # return 0 00:28:20.297 10:24:53 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 [2024-04-17 10:24:53.386337] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:20.297 10:24:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 10:24:53 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 Malloc0 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 [2024-04-17 10:24:53.474345] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:20.297 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.297 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.297 [2024-04-17 10:24:53.490116] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:20.297 [ 
00:28:20.297 { 00:28:20.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:20.297 "subtype": "Discovery", 00:28:20.297 "listen_addresses": [ 00:28:20.297 { 00:28:20.297 "transport": "TCP", 00:28:20.297 "trtype": "TCP", 00:28:20.297 "adrfam": "IPv4", 00:28:20.297 "traddr": "10.0.0.2", 00:28:20.297 "trsvcid": "4420" 00:28:20.297 } 00:28:20.297 ], 00:28:20.297 "allow_any_host": true, 00:28:20.297 "hosts": [] 00:28:20.297 }, 00:28:20.297 { 00:28:20.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.297 "subtype": "NVMe", 00:28:20.297 "listen_addresses": [ 00:28:20.297 { 00:28:20.297 "transport": "TCP", 00:28:20.297 "trtype": "TCP", 00:28:20.297 "adrfam": "IPv4", 00:28:20.297 "traddr": "10.0.0.2", 00:28:20.297 "trsvcid": "4420" 00:28:20.297 } 00:28:20.297 ], 00:28:20.297 "allow_any_host": true, 00:28:20.297 "hosts": [], 00:28:20.297 "serial_number": "SPDK00000000000001", 00:28:20.297 "model_number": "SPDK bdev Controller", 00:28:20.297 "max_namespaces": 32, 00:28:20.297 "min_cntlid": 1, 00:28:20.297 "max_cntlid": 65519, 00:28:20.297 "namespaces": [ 00:28:20.297 { 00:28:20.297 "nsid": 1, 00:28:20.297 "bdev_name": "Malloc0", 00:28:20.297 "name": "Malloc0", 00:28:20.297 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:20.297 "eui64": "ABCDEF0123456789", 00:28:20.297 "uuid": "10e93e93-d052-4bf2-894e-bec0648c97e7" 00:28:20.297 } 00:28:20.297 ] 00:28:20.297 } 00:28:20.297 ] 00:28:20.297 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.297 10:24:53 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:20.297 [2024-04-17 10:24:53.525395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
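With nvmf_tgt running inside cvl_0_0_ns_spdk, the rpc_cmd calls above configure the TCP transport, a Malloc-backed namespace, and the data and discovery listeners. Assuming the default /var/tmp/spdk.sock RPC socket and the in-tree rpc.py client (the test uses its rpc_cmd wrapper), the equivalent sequence is roughly:

# Hedged sketch of the target configuration issued above; arguments copied from the log,
# rpc.py path assumed from the workspace layout.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                  # prints the subsystem JSON shown above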
00:28:20.297 [2024-04-17 10:24:53.525441] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585713 ] 00:28:20.297 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.297 [2024-04-17 10:24:53.563160] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:20.297 [2024-04-17 10:24:53.563216] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:20.297 [2024-04-17 10:24:53.563222] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:20.297 [2024-04-17 10:24:53.563235] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:20.297 [2024-04-17 10:24:53.563244] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:20.297 [2024-04-17 10:24:53.563638] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:20.297 [2024-04-17 10:24:53.563680] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22fc9e0 0 00:28:20.297 [2024-04-17 10:24:53.577651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:20.297 [2024-04-17 10:24:53.577667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:20.297 [2024-04-17 10:24:53.577674] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:20.297 [2024-04-17 10:24:53.577679] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:20.297 [2024-04-17 10:24:53.577725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.577732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.577738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.297 [2024-04-17 10:24:53.577752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:20.297 [2024-04-17 10:24:53.577777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.297 [2024-04-17 10:24:53.585656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.297 [2024-04-17 10:24:53.585667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.297 [2024-04-17 10:24:53.585672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.585678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.297 [2024-04-17 10:24:53.585690] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:20.297 [2024-04-17 10:24:53.585698] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:20.297 [2024-04-17 10:24:53.585704] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:20.297 [2024-04-17 10:24:53.585722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.585728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.585732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.297 [2024-04-17 10:24:53.585742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.297 [2024-04-17 10:24:53.585758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.297 [2024-04-17 10:24:53.585964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.297 [2024-04-17 10:24:53.585973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.297 [2024-04-17 10:24:53.585977] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.585982] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.297 [2024-04-17 10:24:53.585993] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:20.297 [2024-04-17 10:24:53.586004] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:20.297 [2024-04-17 10:24:53.586013] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.586018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.297 [2024-04-17 10:24:53.586023] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.586031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.586045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.586146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.298 [2024-04-17 10:24:53.586155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.298 [2024-04-17 10:24:53.586159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.298 [2024-04-17 10:24:53.586171] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:20.298 [2024-04-17 10:24:53.586182] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.586190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586199] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.586208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.586224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.586358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.298 [2024-04-17 
10:24:53.586366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.298 [2024-04-17 10:24:53.586370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.298 [2024-04-17 10:24:53.586382] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.586394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.586412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.586425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.586522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.298 [2024-04-17 10:24:53.586530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.298 [2024-04-17 10:24:53.586534] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.298 [2024-04-17 10:24:53.586546] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:20.298 [2024-04-17 10:24:53.586552] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.586562] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.586669] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:20.298 [2024-04-17 10:24:53.586676] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.586686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586691] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.586696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.586704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.586718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.586985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.298 [2024-04-17 10:24:53.586993] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.298 [2024-04-17 10:24:53.586998] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587002] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.298 [2024-04-17 10:24:53.587009] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:20.298 [2024-04-17 10:24:53.587021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.587042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.587056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.587152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.298 [2024-04-17 10:24:53.587160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.298 [2024-04-17 10:24:53.587165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.298 [2024-04-17 10:24:53.587176] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:20.298 [2024-04-17 10:24:53.587182] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:20.298 [2024-04-17 10:24:53.587192] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:20.298 [2024-04-17 10:24:53.587202] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:20.298 [2024-04-17 10:24:53.587212] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587217] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.298 [2024-04-17 10:24:53.587231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.298 [2024-04-17 10:24:53.587244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.298 [2024-04-17 10:24:53.587368] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.298 [2024-04-17 10:24:53.587377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.298 [2024-04-17 10:24:53.587382] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587387] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22fc9e0): datao=0, datal=4096, cccid=0 00:28:20.298 [2024-04-17 10:24:53.587393] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2364730) on tqpair(0x22fc9e0): 
expected_datao=0, payload_size=4096 00:28:20.298 [2024-04-17 10:24:53.587437] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.298 [2024-04-17 10:24:53.587443] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.628829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.628847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.563 [2024-04-17 10:24:53.628851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.628856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.563 [2024-04-17 10:24:53.628869] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:20.563 [2024-04-17 10:24:53.628879] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:20.563 [2024-04-17 10:24:53.628886] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:20.563 [2024-04-17 10:24:53.628892] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:20.563 [2024-04-17 10:24:53.628898] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:20.563 [2024-04-17 10:24:53.628904] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:20.563 [2024-04-17 10:24:53.628919] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.563 [2024-04-17 10:24:53.628928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.628933] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.628938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.628949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.563 [2024-04-17 10:24:53.628965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.563 [2024-04-17 10:24:53.629093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.629101] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.563 [2024-04-17 10:24:53.629106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364730) on tqpair=0x22fc9e0 00:28:20.563 [2024-04-17 10:24:53.629120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629130] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:20.563 [2024-04-17 10:24:53.629145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.563 [2024-04-17 10:24:53.629169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.563 [2024-04-17 10:24:53.629192] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.563 [2024-04-17 10:24:53.629214] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.563 [2024-04-17 10:24:53.629228] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.563 [2024-04-17 10:24:53.629236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.563 [2024-04-17 10:24:53.629268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364730, cid 0, qid 0 00:28:20.563 [2024-04-17 10:24:53.629278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364890, cid 1, qid 0 00:28:20.563 [2024-04-17 10:24:53.629283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23649f0, cid 2, qid 0 00:28:20.563 [2024-04-17 10:24:53.629289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.563 [2024-04-17 10:24:53.629295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364cb0, cid 4, qid 0 00:28:20.563 [2024-04-17 10:24:53.629455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.629464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.563 [2024-04-17 10:24:53.629468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629473] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364cb0) on tqpair=0x22fc9e0 00:28:20.563 [2024-04-17 10:24:53.629480] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:20.563 [2024-04-17 10:24:53.629487] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:20.563 [2024-04-17 10:24:53.629500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.629510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.629518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.563 [2024-04-17 10:24:53.629531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364cb0, cid 4, qid 0 00:28:20.563 [2024-04-17 10:24:53.633651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.563 [2024-04-17 10:24:53.633663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.563 [2024-04-17 10:24:53.633668] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633673] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22fc9e0): datao=0, datal=4096, cccid=4 00:28:20.563 [2024-04-17 10:24:53.633679] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2364cb0) on tqpair(0x22fc9e0): expected_datao=0, payload_size=4096 00:28:20.563 [2024-04-17 10:24:53.633694] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633699] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.633717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.563 [2024-04-17 10:24:53.633721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364cb0) on tqpair=0x22fc9e0 00:28:20.563 [2024-04-17 10:24:53.633741] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:20.563 [2024-04-17 10:24:53.633765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 10:24:53.633784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.563 [2024-04-17 10:24:53.633792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.633802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22fc9e0) 00:28:20.563 [2024-04-17 
10:24:53.633813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.563 [2024-04-17 10:24:53.633836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364cb0, cid 4, qid 0 00:28:20.563 [2024-04-17 10:24:53.633844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364e10, cid 5, qid 0 00:28:20.563 [2024-04-17 10:24:53.634120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.563 [2024-04-17 10:24:53.634129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.563 [2024-04-17 10:24:53.634133] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.634137] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22fc9e0): datao=0, datal=1024, cccid=4 00:28:20.563 [2024-04-17 10:24:53.634143] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2364cb0) on tqpair(0x22fc9e0): expected_datao=0, payload_size=1024 00:28:20.563 [2024-04-17 10:24:53.634152] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.634157] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.634164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.634171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.563 [2024-04-17 10:24:53.634177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.563 [2024-04-17 10:24:53.634181] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364e10) on tqpair=0x22fc9e0 00:28:20.563 [2024-04-17 10:24:53.674867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.563 [2024-04-17 10:24:53.674883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.564 [2024-04-17 10:24:53.674888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.674893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364cb0) on tqpair=0x22fc9e0 00:28:20.564 [2024-04-17 10:24:53.674910] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.674915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.674920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22fc9e0) 00:28:20.564 [2024-04-17 10:24:53.674931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.564 [2024-04-17 10:24:53.674952] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364cb0, cid 4, qid 0 00:28:20.564 [2024-04-17 10:24:53.675094] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.564 [2024-04-17 10:24:53.675102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.564 [2024-04-17 10:24:53.675107] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.675112] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22fc9e0): datao=0, datal=3072, cccid=4 00:28:20.564 [2024-04-17 10:24:53.675117] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2364cb0) on tqpair(0x22fc9e0): expected_datao=0, payload_size=3072 
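The GET LOG PAGE (02) commands with log ID 70h above, answered by the 1024-, 3072- and 8-byte C2H data PDUs, are the host pulling the discovery log page from the discovery subsystem in pieces: the 1 KiB header first, then the records, then an 8-byte re-read of the generation counter to confirm the log did not change while it was being read. For reference, the same admin command can be issued through SPDK's public API; the sketch below is a minimal illustration rather than the test's own code, it only fetches the header (the first 1024-byte read in the trace), and the helper name, callback name and done flag are assumptions.

    /* Hedged sketch: read the discovery log page header (log ID 0x70). */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static void
    get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)    /* hypothetical callback */
    {
            *(bool *)arg = true;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "GET LOG PAGE failed\n");
            }
    }

    /* Assumes 'ctrlr' is an already-connected discovery controller. */
    static int
    read_discovery_log_header(struct spdk_nvme_ctrlr *ctrlr)
    {
            /* 1 KiB header, matching the first 1024-byte read in the trace above. */
            static struct spdk_nvmf_discovery_log_page header;
            bool done = false;
            int rc;

            rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  0 /* nsid, as in the trace */,
                                                  &header, sizeof(header),
                                                  0 /* offset */, get_log_done, &done);
            if (rc != 0) {
                    return rc;
            }
            while (!done) {
                    /* Completions (and the C2H data PDUs seen above) are reaped by polling. */
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("genctr=%ju numrec=%ju\n",
                   (uintmax_t)header.genctr, (uintmax_t)header.numrec);
            return 0;
    }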
00:28:20.564 [2024-04-17 10:24:53.675142] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.675147] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.564 [2024-04-17 10:24:53.719670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.564 [2024-04-17 10:24:53.719674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719679] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364cb0) on tqpair=0x22fc9e0 00:28:20.564 [2024-04-17 10:24:53.719692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22fc9e0) 00:28:20.564 [2024-04-17 10:24:53.719716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.564 [2024-04-17 10:24:53.719736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364cb0, cid 4, qid 0 00:28:20.564 [2024-04-17 10:24:53.719859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.564 [2024-04-17 10:24:53.719868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.564 [2024-04-17 10:24:53.719872] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719877] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22fc9e0): datao=0, datal=8, cccid=4 00:28:20.564 [2024-04-17 10:24:53.719883] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2364cb0) on tqpair(0x22fc9e0): expected_datao=0, payload_size=8 00:28:20.564 [2024-04-17 10:24:53.719892] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.719897] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.761808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.564 [2024-04-17 10:24:53.761824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.564 [2024-04-17 10:24:53.761828] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.564 [2024-04-17 10:24:53.761834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364cb0) on tqpair=0x22fc9e0 00:28:20.564 ===================================================== 00:28:20.564 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:20.564 ===================================================== 00:28:20.564 Controller Capabilities/Features 00:28:20.564 ================================ 00:28:20.564 Vendor ID: 0000 00:28:20.564 Subsystem Vendor ID: 0000 00:28:20.564 Serial Number: .................... 00:28:20.564 Model Number: ........................................ 
00:28:20.564 Firmware Version: 24.01.1 00:28:20.564 Recommended Arb Burst: 0 00:28:20.564 IEEE OUI Identifier: 00 00 00 00:28:20.564 Multi-path I/O 00:28:20.564 May have multiple subsystem ports: No 00:28:20.564 May have multiple controllers: No 00:28:20.564 Associated with SR-IOV VF: No 00:28:20.564 Max Data Transfer Size: 131072 00:28:20.564 Max Number of Namespaces: 0 00:28:20.564 Max Number of I/O Queues: 1024 00:28:20.564 NVMe Specification Version (VS): 1.3 00:28:20.564 NVMe Specification Version (Identify): 1.3 00:28:20.564 Maximum Queue Entries: 128 00:28:20.564 Contiguous Queues Required: Yes 00:28:20.564 Arbitration Mechanisms Supported 00:28:20.564 Weighted Round Robin: Not Supported 00:28:20.564 Vendor Specific: Not Supported 00:28:20.564 Reset Timeout: 15000 ms 00:28:20.564 Doorbell Stride: 4 bytes 00:28:20.564 NVM Subsystem Reset: Not Supported 00:28:20.564 Command Sets Supported 00:28:20.564 NVM Command Set: Supported 00:28:20.564 Boot Partition: Not Supported 00:28:20.564 Memory Page Size Minimum: 4096 bytes 00:28:20.564 Memory Page Size Maximum: 4096 bytes 00:28:20.564 Persistent Memory Region: Not Supported 00:28:20.564 Optional Asynchronous Events Supported 00:28:20.564 Namespace Attribute Notices: Not Supported 00:28:20.564 Firmware Activation Notices: Not Supported 00:28:20.564 ANA Change Notices: Not Supported 00:28:20.564 PLE Aggregate Log Change Notices: Not Supported 00:28:20.564 LBA Status Info Alert Notices: Not Supported 00:28:20.564 EGE Aggregate Log Change Notices: Not Supported 00:28:20.564 Normal NVM Subsystem Shutdown event: Not Supported 00:28:20.564 Zone Descriptor Change Notices: Not Supported 00:28:20.564 Discovery Log Change Notices: Supported 00:28:20.564 Controller Attributes 00:28:20.564 128-bit Host Identifier: Not Supported 00:28:20.564 Non-Operational Permissive Mode: Not Supported 00:28:20.564 NVM Sets: Not Supported 00:28:20.564 Read Recovery Levels: Not Supported 00:28:20.564 Endurance Groups: Not Supported 00:28:20.564 Predictable Latency Mode: Not Supported 00:28:20.564 Traffic Based Keep ALive: Not Supported 00:28:20.564 Namespace Granularity: Not Supported 00:28:20.564 SQ Associations: Not Supported 00:28:20.564 UUID List: Not Supported 00:28:20.564 Multi-Domain Subsystem: Not Supported 00:28:20.564 Fixed Capacity Management: Not Supported 00:28:20.564 Variable Capacity Management: Not Supported 00:28:20.564 Delete Endurance Group: Not Supported 00:28:20.564 Delete NVM Set: Not Supported 00:28:20.564 Extended LBA Formats Supported: Not Supported 00:28:20.564 Flexible Data Placement Supported: Not Supported 00:28:20.564 00:28:20.564 Controller Memory Buffer Support 00:28:20.564 ================================ 00:28:20.564 Supported: No 00:28:20.564 00:28:20.564 Persistent Memory Region Support 00:28:20.564 ================================ 00:28:20.564 Supported: No 00:28:20.564 00:28:20.564 Admin Command Set Attributes 00:28:20.564 ============================ 00:28:20.564 Security Send/Receive: Not Supported 00:28:20.564 Format NVM: Not Supported 00:28:20.564 Firmware Activate/Download: Not Supported 00:28:20.564 Namespace Management: Not Supported 00:28:20.564 Device Self-Test: Not Supported 00:28:20.564 Directives: Not Supported 00:28:20.564 NVMe-MI: Not Supported 00:28:20.564 Virtualization Management: Not Supported 00:28:20.564 Doorbell Buffer Config: Not Supported 00:28:20.564 Get LBA Status Capability: Not Supported 00:28:20.564 Command & Feature Lockdown Capability: Not Supported 00:28:20.564 Abort Command Limit: 1 00:28:20.564 
Async Event Request Limit: 4 00:28:20.564 Number of Firmware Slots: N/A 00:28:20.564 Firmware Slot 1 Read-Only: N/A 00:28:20.564 Firmware Activation Without Reset: N/A 00:28:20.564 Multiple Update Detection Support: N/A 00:28:20.564 Firmware Update Granularity: No Information Provided 00:28:20.564 Per-Namespace SMART Log: No 00:28:20.564 Asymmetric Namespace Access Log Page: Not Supported 00:28:20.564 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:20.564 Command Effects Log Page: Not Supported 00:28:20.564 Get Log Page Extended Data: Supported 00:28:20.564 Telemetry Log Pages: Not Supported 00:28:20.564 Persistent Event Log Pages: Not Supported 00:28:20.564 Supported Log Pages Log Page: May Support 00:28:20.564 Commands Supported & Effects Log Page: Not Supported 00:28:20.564 Feature Identifiers & Effects Log Page:May Support 00:28:20.564 NVMe-MI Commands & Effects Log Page: May Support 00:28:20.564 Data Area 4 for Telemetry Log: Not Supported 00:28:20.564 Error Log Page Entries Supported: 128 00:28:20.564 Keep Alive: Not Supported 00:28:20.564 00:28:20.564 NVM Command Set Attributes 00:28:20.564 ========================== 00:28:20.564 Submission Queue Entry Size 00:28:20.564 Max: 1 00:28:20.564 Min: 1 00:28:20.564 Completion Queue Entry Size 00:28:20.564 Max: 1 00:28:20.564 Min: 1 00:28:20.564 Number of Namespaces: 0 00:28:20.564 Compare Command: Not Supported 00:28:20.564 Write Uncorrectable Command: Not Supported 00:28:20.564 Dataset Management Command: Not Supported 00:28:20.564 Write Zeroes Command: Not Supported 00:28:20.564 Set Features Save Field: Not Supported 00:28:20.564 Reservations: Not Supported 00:28:20.564 Timestamp: Not Supported 00:28:20.564 Copy: Not Supported 00:28:20.564 Volatile Write Cache: Not Present 00:28:20.564 Atomic Write Unit (Normal): 1 00:28:20.564 Atomic Write Unit (PFail): 1 00:28:20.564 Atomic Compare & Write Unit: 1 00:28:20.564 Fused Compare & Write: Supported 00:28:20.564 Scatter-Gather List 00:28:20.564 SGL Command Set: Supported 00:28:20.564 SGL Keyed: Supported 00:28:20.564 SGL Bit Bucket Descriptor: Not Supported 00:28:20.565 SGL Metadata Pointer: Not Supported 00:28:20.565 Oversized SGL: Not Supported 00:28:20.565 SGL Metadata Address: Not Supported 00:28:20.565 SGL Offset: Supported 00:28:20.565 Transport SGL Data Block: Not Supported 00:28:20.565 Replay Protected Memory Block: Not Supported 00:28:20.565 00:28:20.565 Firmware Slot Information 00:28:20.565 ========================= 00:28:20.565 Active slot: 0 00:28:20.565 00:28:20.565 00:28:20.565 Error Log 00:28:20.565 ========= 00:28:20.565 00:28:20.565 Active Namespaces 00:28:20.565 ================= 00:28:20.565 Discovery Log Page 00:28:20.565 ================== 00:28:20.565 Generation Counter: 2 00:28:20.565 Number of Records: 2 00:28:20.565 Record Format: 0 00:28:20.565 00:28:20.565 Discovery Log Entry 0 00:28:20.565 ---------------------- 00:28:20.565 Transport Type: 3 (TCP) 00:28:20.565 Address Family: 1 (IPv4) 00:28:20.565 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:20.565 Entry Flags: 00:28:20.565 Duplicate Returned Information: 1 00:28:20.565 Explicit Persistent Connection Support for Discovery: 1 00:28:20.565 Transport Requirements: 00:28:20.565 Secure Channel: Not Required 00:28:20.565 Port ID: 0 (0x0000) 00:28:20.565 Controller ID: 65535 (0xffff) 00:28:20.565 Admin Max SQ Size: 128 00:28:20.565 Transport Service Identifier: 4420 00:28:20.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:20.565 Transport Address: 10.0.0.2 00:28:20.565 
Discovery Log Entry 1 00:28:20.565 ---------------------- 00:28:20.565 Transport Type: 3 (TCP) 00:28:20.565 Address Family: 1 (IPv4) 00:28:20.565 Subsystem Type: 2 (NVM Subsystem) 00:28:20.565 Entry Flags: 00:28:20.565 Duplicate Returned Information: 0 00:28:20.565 Explicit Persistent Connection Support for Discovery: 0 00:28:20.565 Transport Requirements: 00:28:20.565 Secure Channel: Not Required 00:28:20.565 Port ID: 0 (0x0000) 00:28:20.565 Controller ID: 65535 (0xffff) 00:28:20.565 Admin Max SQ Size: 128 00:28:20.565 Transport Service Identifier: 4420 00:28:20.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:20.565 Transport Address: 10.0.0.2 [2024-04-17 10:24:53.761944] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:20.565 [2024-04-17 10:24:53.761961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.565 [2024-04-17 10:24:53.761970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.565 [2024-04-17 10:24:53.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.565 [2024-04-17 10:24:53.761985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.565 [2024-04-17 10:24:53.761999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.762144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.762148] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.762162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762172] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.762342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.762346] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.762361] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:20.565 [2024-04-17 10:24:53.762366] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:20.565 [2024-04-17 10:24:53.762378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.762511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.762516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.762533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.762672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.762676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.762694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762700] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762835] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 
10:24:53.762843] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.762847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.762865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.762875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.762883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.762896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.762989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.763000] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.763004] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763009] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.763022] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.763040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.763053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.565 [2024-04-17 10:24:53.763152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.565 [2024-04-17 10:24:53.763160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.565 [2024-04-17 10:24:53.763164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.565 [2024-04-17 10:24:53.763182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.565 [2024-04-17 10:24:53.763191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.565 [2024-04-17 10:24:53.763200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.565 [2024-04-17 10:24:53.763212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.566 [2024-04-17 10:24:53.763349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.763357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.763361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
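The repeated FABRIC PROPERTY GET capsules running through this stretch are the host tearing down the discovery controller: outstanding admin commands were aborted (the "ABORTED - SQ DELETION" completions earlier), CC.SHN was written via a Property Set, and CSTS is now being polled until the shutdown-complete status is reported. From an application's point of view the whole sequence is hidden behind a single detach call; a minimal sketch, assuming 'ctrlr' is the connected discovery controller:

    #include "spdk/nvme.h"

    /* Hedged sketch: spdk_nvme_detach() performs the spec-defined shutdown
     * (CC.SHN write + CSTS polling) internally; on NVMe-oF those register
     * accesses travel as Fabric Property Set/Get capsules, which is what the
     * debug trace around this point shows. */
    static void
    release_discovery_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_detach(ctrlr);
    }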
00:28:20.566 [2024-04-17 10:24:53.763366] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.566 [2024-04-17 10:24:53.763379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.763384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.763389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.566 [2024-04-17 10:24:53.763397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.763410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.566 [2024-04-17 10:24:53.763509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.763517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.763521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.763526] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.566 [2024-04-17 10:24:53.763539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.763544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.763549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.566 [2024-04-17 10:24:53.763557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.763570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.566 [2024-04-17 10:24:53.767652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.767668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.767672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.767677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.566 [2024-04-17 10:24:53.767692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.767698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.767702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22fc9e0) 00:28:20.566 [2024-04-17 10:24:53.767711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.767727] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2364b50, cid 3, qid 0 00:28:20.566 [2024-04-17 10:24:53.767943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.767951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.767955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.767960] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2364b50) on tqpair=0x22fc9e0 00:28:20.566 [2024-04-17 10:24:53.767971] 
nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:20.566 00:28:20.566 10:24:53 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:20.566 [2024-04-17 10:24:53.809562] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:20.566 [2024-04-17 10:24:53.809606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585717 ] 00:28:20.566 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.566 [2024-04-17 10:24:53.847843] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:20.566 [2024-04-17 10:24:53.847898] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:20.566 [2024-04-17 10:24:53.847905] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:20.566 [2024-04-17 10:24:53.847922] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:20.566 [2024-04-17 10:24:53.847931] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:20.566 [2024-04-17 10:24:53.848260] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:20.566 [2024-04-17 10:24:53.848293] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23909e0 0 00:28:20.566 [2024-04-17 10:24:53.862650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:20.566 [2024-04-17 10:24:53.862665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:20.566 [2024-04-17 10:24:53.862670] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:20.566 [2024-04-17 10:24:53.862675] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:20.566 [2024-04-17 10:24:53.862714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.862721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.862726] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.566 [2024-04-17 10:24:53.862739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:20.566 [2024-04-17 10:24:53.862763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.566 [2024-04-17 10:24:53.870657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.870668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.870673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.566 [2024-04-17 10:24:53.870693] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:20.566 [2024-04-17 10:24:53.870701] 
nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:20.566 [2024-04-17 10:24:53.870708] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:20.566 [2024-04-17 10:24:53.870724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870730] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.566 [2024-04-17 10:24:53.870744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.870761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.566 [2024-04-17 10:24:53.870915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.870924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.870928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.566 [2024-04-17 10:24:53.870943] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:20.566 [2024-04-17 10:24:53.870953] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:20.566 [2024-04-17 10:24:53.870962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870967] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.870972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.566 [2024-04-17 10:24:53.870980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.870994] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.566 [2024-04-17 10:24:53.871071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.871079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.871084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.566 [2024-04-17 10:24:53.871096] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:20.566 [2024-04-17 10:24:53.871106] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:20.566 [2024-04-17 10:24:53.871115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 
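The -r argument passed to spdk_nvme_identify above is a transport ID string, and the trace that follows (connect adminq, icreq, FABRIC CONNECT, then the "read vs" / "read cap" Property Gets) is the library bringing the admin queue up from that description. A host program can drive the same sequence directly; the sketch below is illustrative only, with environment options left at defaults and error handling abbreviated.

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid;
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&env_opts);
            if (spdk_env_init(&env_opts) != 0) {
                    return 1;
            }

            /* Same transport ID string that identify.sh passes via -r. */
            memset(&trid, 0, sizeof(trid));
            if (spdk_nvme_transport_id_parse(&trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* NULL opts => library defaults; this triggers the admin-queue connect,
             * FABRIC CONNECT and read vs / read cap sequence logged above. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    fprintf(stderr, "connect failed\n");
                    return 1;
            }

            spdk_nvme_detach(ctrlr);
            return 0;
    }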
00:28:20.566 [2024-04-17 10:24:53.871132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.871146] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.566 [2024-04-17 10:24:53.871225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.871234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.871238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.566 [2024-04-17 10:24:53.871251] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:20.566 [2024-04-17 10:24:53.871263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.566 [2024-04-17 10:24:53.871281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.566 [2024-04-17 10:24:53.871294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.566 [2024-04-17 10:24:53.871371] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.566 [2024-04-17 10:24:53.871379] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.566 [2024-04-17 10:24:53.871383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.566 [2024-04-17 10:24:53.871388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.566 [2024-04-17 10:24:53.871394] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:20.567 [2024-04-17 10:24:53.871401] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:20.567 [2024-04-17 10:24:53.871411] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:20.567 [2024-04-17 10:24:53.871517] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:20.567 [2024-04-17 10:24:53.871522] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:20.567 [2024-04-17 10:24:53.871532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.871549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.567 [2024-04-17 10:24:53.871563] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.567 [2024-04-17 10:24:53.871639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.567 [2024-04-17 10:24:53.871656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.567 [2024-04-17 10:24:53.871661] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.567 [2024-04-17 10:24:53.871673] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:20.567 [2024-04-17 10:24:53.871685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871695] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.871703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.567 [2024-04-17 10:24:53.871720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.567 [2024-04-17 10:24:53.871798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.567 [2024-04-17 10:24:53.871806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.567 [2024-04-17 10:24:53.871810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.567 [2024-04-17 10:24:53.871822] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:20.567 [2024-04-17 10:24:53.871828] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:20.567 [2024-04-17 10:24:53.871838] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:20.567 [2024-04-17 10:24:53.871848] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:20.567 [2024-04-17 10:24:53.871858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871864] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.871868] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.871877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.567 [2024-04-17 10:24:53.871890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.567 [2024-04-17 10:24:53.872004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.567 [2024-04-17 10:24:53.872013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.567 [2024-04-17 10:24:53.872018] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872022] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=4096, cccid=0 00:28:20.567 [2024-04-17 10:24:53.872028] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8730) on tqpair(0x23909e0): expected_datao=0, payload_size=4096 00:28:20.567 [2024-04-17 10:24:53.872038] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872043] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.567 [2024-04-17 10:24:53.872099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.567 [2024-04-17 10:24:53.872104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872109] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.567 [2024-04-17 10:24:53.872118] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:20.567 [2024-04-17 10:24:53.872127] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:20.567 [2024-04-17 10:24:53.872133] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:20.567 [2024-04-17 10:24:53.872138] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:20.567 [2024-04-17 10:24:53.872144] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:20.567 [2024-04-17 10:24:53.872150] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:20.567 [2024-04-17 10:24:53.872161] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:20.567 [2024-04-17 10:24:53.872169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.872190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.567 [2024-04-17 10:24:53.872204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.567 [2024-04-17 10:24:53.872282] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.567 [2024-04-17 10:24:53.872291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.567 [2024-04-17 10:24:53.872295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8730) on tqpair=0x23909e0 00:28:20.567 [2024-04-17 10:24:53.872308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872313] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 
10:24:53.872318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.872325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.567 [2024-04-17 10:24:53.872333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.872349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.567 [2024-04-17 10:24:53.872357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.872373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.567 [2024-04-17 10:24:53.872380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872385] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.567 [2024-04-17 10:24:53.872390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.567 [2024-04-17 10:24:53.872397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.567 [2024-04-17 10:24:53.872403] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:20.567 [2024-04-17 10:24:53.872416] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872433] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.872442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.568 [2024-04-17 10:24:53.872457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8730, cid 0, qid 0 00:28:20.568 [2024-04-17 10:24:53.872464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8890, cid 1, qid 0 00:28:20.568 [2024-04-17 10:24:53.872470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f89f0, cid 2, qid 0 00:28:20.568 [2024-04-17 10:24:53.872476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.568 [2024-04-17 10:24:53.872484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.872581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
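The SET FEATURES ASYNC EVENT CONFIGURATION command and the four ASYNC EVENT REQUESTs (cid 0-3) above are the driver arming asynchronous event reporting during init, and the GET FEATURES KEEP ALIVE TIMER that follows negotiates the keep-alive interval; the "Sending keep alive every 5000000 us" lines elsewhere in the trace are consistent with the library issuing keep-alives at half the negotiated timeout. An application normally just hooks a callback and, if desired, overrides the keep-alive timeout in the connect options. A hedged sketch, where the callback name, helper name and the 10-second value are assumptions:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical AER handler: invoked from spdk_nvme_ctrlr_process_admin_completions()
     * whenever the target completes one of the outstanding ASYNC EVENT REQUESTs. */
    static void
    on_async_event(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)arg;
            printf("async event, cdw0=0x%08x\n", cpl->cdw0);
    }

    /* Assumes 'trid' already describes the target (see the connect sketch earlier). */
    static struct spdk_nvme_ctrlr *
    connect_with_aer(const struct spdk_nvme_transport_id *trid)
    {
            struct spdk_nvme_ctrlr_opts opts;
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            opts.keep_alive_timeout_ms = 10000;   /* assumption: 10 s keep-alive timeout */

            ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
            if (ctrlr != NULL) {
                    spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_async_event, NULL);
            }
            return ctrlr;
    }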
00:28:20.568 [2024-04-17 10:24:53.872589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.568 [2024-04-17 10:24:53.872594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.568 [2024-04-17 10:24:53.872605] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:20.568 [2024-04-17 10:24:53.872612] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872622] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872630] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.872662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:20.568 [2024-04-17 10:24:53.872676] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.872769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.568 [2024-04-17 10:24:53.872778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.568 [2024-04-17 10:24:53.872782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.568 [2024-04-17 10:24:53.872849] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872862] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.872871] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872876] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.872880] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.872888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.568 [2024-04-17 10:24:53.872903] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.872988] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.568 [2024-04-17 10:24:53.872997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.568 [2024-04-17 10:24:53.873001] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873006] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=4096, cccid=4 00:28:20.568 [2024-04-17 10:24:53.873011] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8cb0) on tqpair(0x23909e0): expected_datao=0, payload_size=4096 00:28:20.568 [2024-04-17 10:24:53.873080] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873085] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.568 [2024-04-17 10:24:53.873147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.568 [2024-04-17 10:24:53.873151] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873155] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.568 [2024-04-17 10:24:53.873167] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:20.568 [2024-04-17 10:24:53.873186] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873198] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873207] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.873225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.568 [2024-04-17 10:24:53.873239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.873349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.568 [2024-04-17 10:24:53.873358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.568 [2024-04-17 10:24:53.873363] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873367] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=4096, cccid=4 00:28:20.568 [2024-04-17 10:24:53.873373] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8cb0) on tqpair(0x23909e0): expected_datao=0, payload_size=4096 00:28:20.568 [2024-04-17 10:24:53.873382] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873387] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.568 [2024-04-17 10:24:53.873431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.568 [2024-04-17 10:24:53.873435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.568 [2024-04-17 10:24:53.873455] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873468] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.873495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.568 [2024-04-17 10:24:53.873509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.873627] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.568 [2024-04-17 10:24:53.873638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.568 [2024-04-17 10:24:53.873648] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873653] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=4096, cccid=4 00:28:20.568 [2024-04-17 10:24:53.873659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8cb0) on tqpair(0x23909e0): expected_datao=0, payload_size=4096 00:28:20.568 [2024-04-17 10:24:53.873672] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873677] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873706] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.568 [2024-04-17 10:24:53.873714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.568 [2024-04-17 10:24:53.873718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.568 [2024-04-17 10:24:53.873733] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873744] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873754] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873762] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873768] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873774] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:20.568 [2024-04-17 
10:24:53.873780] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:20.568 [2024-04-17 10:24:53.873787] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:20.568 [2024-04-17 10:24:53.873803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.873821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.568 [2024-04-17 10:24:53.873829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.568 [2024-04-17 10:24:53.873838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23909e0) 00:28:20.568 [2024-04-17 10:24:53.873846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.568 [2024-04-17 10:24:53.873864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.568 [2024-04-17 10:24:53.873871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8e10, cid 5, qid 0 00:28:20.569 [2024-04-17 10:24:53.873982] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.873991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.873995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.874009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.874016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.874021] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874026] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8e10) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.874039] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8e10, cid 5, qid 0 00:28:20.569 [2024-04-17 10:24:53.874152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.874160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.874164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:28:20.569 [2024-04-17 10:24:53.874169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8e10) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.874181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874191] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8e10, cid 5, qid 0 00:28:20.569 [2024-04-17 10:24:53.874294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.874302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.874307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8e10) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.874323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874333] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8e10, cid 5, qid 0 00:28:20.569 [2024-04-17 10:24:53.874436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.874445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.874449] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8e10) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.874469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874496] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
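Editor's note: the debug trace above shows the admin queue being exercised over NVMe/TCP. Each IDENTIFY, GET FEATURES and KEEP ALIVE command goes out as a command capsule on tqpair 0x23909e0, the target answers with a C2H data PDU (pdu type 7) carrying the 4096-byte payload, and a capsule response PDU (pdu type 5) completes the request. Purely as an illustration (the test itself drives SPDK's userspace initiator, not the kernel one), the same target can be exercised by hand with nvme-cli, assuming nvme-cli and the nvme-tcp kernel module are available:

    # Illustrative sketch only; not part of the captured run.
    TRADDR=10.0.0.2                      # listener address from the controller dump below
    TRSVCID=4420
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    modprobe nvme-tcp                    # kernel NVMe/TCP initiator (assumed available)
    nvme discover -t tcp -a "$TRADDR" -s "$TRSVCID"
    nvme connect  -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN"

    # Connecting triggers the same admin-queue traffic traced above:
    # IDENTIFY, GET FEATURES and KEEP ALIVE capsules on the admin qpair.
    nvme id-ctrl /dev/nvme0              # device node is an assumption; check `nvme list`
    nvme disconnect -n "$SUBNQN"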
00:28:20.569 [2024-04-17 10:24:53.874522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874556] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.874560] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23909e0) 00:28:20.569 [2024-04-17 10:24:53.874568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.569 [2024-04-17 10:24:53.874582] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8e10, cid 5, qid 0 00:28:20.569 [2024-04-17 10:24:53.874589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8cb0, cid 4, qid 0 00:28:20.569 [2024-04-17 10:24:53.874595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8f70, cid 6, qid 0 00:28:20.569 [2024-04-17 10:24:53.874601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f90d0, cid 7, qid 0 00:28:20.569 [2024-04-17 10:24:53.878656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.569 [2024-04-17 10:24:53.878668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.569 [2024-04-17 10:24:53.878672] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878677] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=8192, cccid=5 00:28:20.569 [2024-04-17 10:24:53.878683] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8e10) on tqpair(0x23909e0): expected_datao=0, payload_size=8192 00:28:20.569 [2024-04-17 10:24:53.878692] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878697] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.569 [2024-04-17 10:24:53.878712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.569 [2024-04-17 10:24:53.878716] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878721] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=512, cccid=4 00:28:20.569 [2024-04-17 10:24:53.878726] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8cb0) on tqpair(0x23909e0): expected_datao=0, payload_size=512 00:28:20.569 [2024-04-17 10:24:53.878735] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878740] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.569 [2024-04-17 10:24:53.878754] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.569 [2024-04-17 10:24:53.878758] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878763] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=512, cccid=6 00:28:20.569 [2024-04-17 10:24:53.878768] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f8f70) on tqpair(0x23909e0): expected_datao=0, payload_size=512 00:28:20.569 [2024-04-17 10:24:53.878777] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878782] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:20.569 [2024-04-17 10:24:53.878796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:20.569 [2024-04-17 10:24:53.878800] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878805] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23909e0): datao=0, datal=4096, cccid=7 00:28:20.569 [2024-04-17 10:24:53.878813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23f90d0) on tqpair(0x23909e0): expected_datao=0, payload_size=4096 00:28:20.569 [2024-04-17 10:24:53.878822] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878827] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.878841] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.878845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8e10) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.878867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.878875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.878879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8cb0) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.878895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.878903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.878907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8f70) on tqpair=0x23909e0 00:28:20.569 [2024-04-17 10:24:53.878922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.569 [2024-04-17 10:24:53.878929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.569 [2024-04-17 10:24:53.878934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.569 [2024-04-17 10:24:53.878939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f90d0) on tqpair=0x23909e0 00:28:20.569 ===================================================== 00:28:20.569 NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:20.569 ===================================================== 00:28:20.569 Controller Capabilities/Features 00:28:20.569 ================================ 00:28:20.569 Vendor ID: 8086 00:28:20.569 Subsystem Vendor ID: 8086 00:28:20.569 Serial Number: SPDK00000000000001 00:28:20.569 Model Number: SPDK bdev Controller 00:28:20.569 Firmware Version: 24.01.1 00:28:20.570 Recommended Arb Burst: 6 00:28:20.570 IEEE OUI Identifier: e4 d2 5c 00:28:20.570 Multi-path I/O 00:28:20.570 May have multiple subsystem ports: Yes 00:28:20.570 May have multiple controllers: Yes 00:28:20.570 Associated with SR-IOV VF: No 00:28:20.570 Max Data Transfer Size: 131072 00:28:20.570 Max Number of Namespaces: 32 00:28:20.570 Max Number of I/O Queues: 127 00:28:20.570 NVMe Specification Version (VS): 1.3 00:28:20.570 NVMe Specification Version (Identify): 1.3 00:28:20.570 Maximum Queue Entries: 128 00:28:20.570 Contiguous Queues Required: Yes 00:28:20.570 Arbitration Mechanisms Supported 00:28:20.570 Weighted Round Robin: Not Supported 00:28:20.570 Vendor Specific: Not Supported 00:28:20.570 Reset Timeout: 15000 ms 00:28:20.570 Doorbell Stride: 4 bytes 00:28:20.570 NVM Subsystem Reset: Not Supported 00:28:20.570 Command Sets Supported 00:28:20.570 NVM Command Set: Supported 00:28:20.570 Boot Partition: Not Supported 00:28:20.570 Memory Page Size Minimum: 4096 bytes 00:28:20.570 Memory Page Size Maximum: 4096 bytes 00:28:20.570 Persistent Memory Region: Not Supported 00:28:20.570 Optional Asynchronous Events Supported 00:28:20.570 Namespace Attribute Notices: Supported 00:28:20.570 Firmware Activation Notices: Not Supported 00:28:20.570 ANA Change Notices: Not Supported 00:28:20.570 PLE Aggregate Log Change Notices: Not Supported 00:28:20.570 LBA Status Info Alert Notices: Not Supported 00:28:20.570 EGE Aggregate Log Change Notices: Not Supported 00:28:20.570 Normal NVM Subsystem Shutdown event: Not Supported 00:28:20.570 Zone Descriptor Change Notices: Not Supported 00:28:20.570 Discovery Log Change Notices: Not Supported 00:28:20.570 Controller Attributes 00:28:20.570 128-bit Host Identifier: Supported 00:28:20.570 Non-Operational Permissive Mode: Not Supported 00:28:20.570 NVM Sets: Not Supported 00:28:20.570 Read Recovery Levels: Not Supported 00:28:20.570 Endurance Groups: Not Supported 00:28:20.570 Predictable Latency Mode: Not Supported 00:28:20.570 Traffic Based Keep ALive: Not Supported 00:28:20.570 Namespace Granularity: Not Supported 00:28:20.570 SQ Associations: Not Supported 00:28:20.570 UUID List: Not Supported 00:28:20.570 Multi-Domain Subsystem: Not Supported 00:28:20.570 Fixed Capacity Management: Not Supported 00:28:20.570 Variable Capacity Management: Not Supported 00:28:20.570 Delete Endurance Group: Not Supported 00:28:20.570 Delete NVM Set: Not Supported 00:28:20.570 Extended LBA Formats Supported: Not Supported 00:28:20.570 Flexible Data Placement Supported: Not Supported 00:28:20.570 00:28:20.570 Controller Memory Buffer Support 00:28:20.570 ================================ 00:28:20.570 Supported: No 00:28:20.570 00:28:20.570 Persistent Memory Region Support 00:28:20.570 ================================ 00:28:20.570 Supported: No 00:28:20.570 00:28:20.570 Admin Command Set Attributes 00:28:20.570 ============================ 00:28:20.570 Security Send/Receive: Not Supported 00:28:20.570 Format NVM: Not Supported 00:28:20.570 Firmware Activate/Download: Not Supported 00:28:20.570 Namespace Management: Not Supported 00:28:20.570 Device Self-Test: Not 
Supported 00:28:20.570 Directives: Not Supported 00:28:20.570 NVMe-MI: Not Supported 00:28:20.570 Virtualization Management: Not Supported 00:28:20.570 Doorbell Buffer Config: Not Supported 00:28:20.570 Get LBA Status Capability: Not Supported 00:28:20.570 Command & Feature Lockdown Capability: Not Supported 00:28:20.570 Abort Command Limit: 4 00:28:20.570 Async Event Request Limit: 4 00:28:20.570 Number of Firmware Slots: N/A 00:28:20.570 Firmware Slot 1 Read-Only: N/A 00:28:20.570 Firmware Activation Without Reset: N/A 00:28:20.570 Multiple Update Detection Support: N/A 00:28:20.570 Firmware Update Granularity: No Information Provided 00:28:20.570 Per-Namespace SMART Log: No 00:28:20.570 Asymmetric Namespace Access Log Page: Not Supported 00:28:20.570 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:20.570 Command Effects Log Page: Supported 00:28:20.570 Get Log Page Extended Data: Supported 00:28:20.570 Telemetry Log Pages: Not Supported 00:28:20.570 Persistent Event Log Pages: Not Supported 00:28:20.570 Supported Log Pages Log Page: May Support 00:28:20.570 Commands Supported & Effects Log Page: Not Supported 00:28:20.570 Feature Identifiers & Effects Log Page:May Support 00:28:20.570 NVMe-MI Commands & Effects Log Page: May Support 00:28:20.570 Data Area 4 for Telemetry Log: Not Supported 00:28:20.570 Error Log Page Entries Supported: 128 00:28:20.570 Keep Alive: Supported 00:28:20.570 Keep Alive Granularity: 10000 ms 00:28:20.570 00:28:20.570 NVM Command Set Attributes 00:28:20.570 ========================== 00:28:20.570 Submission Queue Entry Size 00:28:20.570 Max: 64 00:28:20.570 Min: 64 00:28:20.570 Completion Queue Entry Size 00:28:20.570 Max: 16 00:28:20.570 Min: 16 00:28:20.570 Number of Namespaces: 32 00:28:20.570 Compare Command: Supported 00:28:20.570 Write Uncorrectable Command: Not Supported 00:28:20.570 Dataset Management Command: Supported 00:28:20.570 Write Zeroes Command: Supported 00:28:20.570 Set Features Save Field: Not Supported 00:28:20.570 Reservations: Supported 00:28:20.570 Timestamp: Not Supported 00:28:20.570 Copy: Supported 00:28:20.570 Volatile Write Cache: Present 00:28:20.570 Atomic Write Unit (Normal): 1 00:28:20.570 Atomic Write Unit (PFail): 1 00:28:20.570 Atomic Compare & Write Unit: 1 00:28:20.570 Fused Compare & Write: Supported 00:28:20.570 Scatter-Gather List 00:28:20.570 SGL Command Set: Supported 00:28:20.570 SGL Keyed: Supported 00:28:20.570 SGL Bit Bucket Descriptor: Not Supported 00:28:20.570 SGL Metadata Pointer: Not Supported 00:28:20.570 Oversized SGL: Not Supported 00:28:20.570 SGL Metadata Address: Not Supported 00:28:20.570 SGL Offset: Supported 00:28:20.570 Transport SGL Data Block: Not Supported 00:28:20.570 Replay Protected Memory Block: Not Supported 00:28:20.570 00:28:20.570 Firmware Slot Information 00:28:20.570 ========================= 00:28:20.570 Active slot: 1 00:28:20.570 Slot 1 Firmware Revision: 24.01.1 00:28:20.570 00:28:20.570 00:28:20.570 Commands Supported and Effects 00:28:20.570 ============================== 00:28:20.570 Admin Commands 00:28:20.570 -------------- 00:28:20.570 Get Log Page (02h): Supported 00:28:20.570 Identify (06h): Supported 00:28:20.570 Abort (08h): Supported 00:28:20.570 Set Features (09h): Supported 00:28:20.570 Get Features (0Ah): Supported 00:28:20.570 Asynchronous Event Request (0Ch): Supported 00:28:20.570 Keep Alive (18h): Supported 00:28:20.570 I/O Commands 00:28:20.570 ------------ 00:28:20.570 Flush (00h): Supported LBA-Change 00:28:20.570 Write (01h): Supported LBA-Change 00:28:20.570 
Read (02h): Supported 00:28:20.570 Compare (05h): Supported 00:28:20.570 Write Zeroes (08h): Supported LBA-Change 00:28:20.570 Dataset Management (09h): Supported LBA-Change 00:28:20.570 Copy (19h): Supported LBA-Change 00:28:20.570 Unknown (79h): Supported LBA-Change 00:28:20.570 Unknown (7Ah): Supported 00:28:20.570 00:28:20.570 Error Log 00:28:20.570 ========= 00:28:20.570 00:28:20.570 Arbitration 00:28:20.570 =========== 00:28:20.570 Arbitration Burst: 1 00:28:20.570 00:28:20.570 Power Management 00:28:20.570 ================ 00:28:20.570 Number of Power States: 1 00:28:20.570 Current Power State: Power State #0 00:28:20.570 Power State #0: 00:28:20.570 Max Power: 0.00 W 00:28:20.570 Non-Operational State: Operational 00:28:20.570 Entry Latency: Not Reported 00:28:20.570 Exit Latency: Not Reported 00:28:20.570 Relative Read Throughput: 0 00:28:20.570 Relative Read Latency: 0 00:28:20.570 Relative Write Throughput: 0 00:28:20.570 Relative Write Latency: 0 00:28:20.570 Idle Power: Not Reported 00:28:20.570 Active Power: Not Reported 00:28:20.570 Non-Operational Permissive Mode: Not Supported 00:28:20.570 00:28:20.570 Health Information 00:28:20.570 ================== 00:28:20.570 Critical Warnings: 00:28:20.570 Available Spare Space: OK 00:28:20.570 Temperature: OK 00:28:20.570 Device Reliability: OK 00:28:20.570 Read Only: No 00:28:20.570 Volatile Memory Backup: OK 00:28:20.570 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:20.570 Temperature Threshold: [2024-04-17 10:24:53.879063] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.570 [2024-04-17 10:24:53.879070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.570 [2024-04-17 10:24:53.879075] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23909e0) 00:28:20.570 [2024-04-17 10:24:53.879084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.570 [2024-04-17 10:24:53.879100] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f90d0, cid 7, qid 0 00:28:20.570 [2024-04-17 10:24:53.879282] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.570 [2024-04-17 10:24:53.879291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.570 [2024-04-17 10:24:53.879295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f90d0) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.879338] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:20.571 [2024-04-17 10:24:53.879352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.571 [2024-04-17 10:24:53.879361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.571 [2024-04-17 10:24:53.879368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.571 [2024-04-17 10:24:53.879376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.571 [2024-04-17 10:24:53.879386] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
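Editor's note: the Error Log, Power Management and Health Information sections above are assembled from the log pages the host just fetched, i.e. the GET LOG PAGE capsules with LIDs 01h, 02h, 03h and 05h earlier in the trace. For reference only, and assuming a connection established as in the earlier sketch, nvme-cli can read the same pages from the kernel-initiator side; the command names follow current nvme-cli and the device node is an assumption:

    # Read the same log pages the SPDK host fetched above (LIDs 01h, 02h, 03h, 05h).
    DEV=/dev/nvme0                              # assumed device node; check `nvme list`

    nvme error-log   "$DEV" --log-entries=128   # LID 01h, 128 entries advertised above
    nvme smart-log   "$DEV"                     # LID 02h, health / temperature data
    nvme fw-log      "$DEV"                     # LID 03h, firmware slot information
    nvme effects-log "$DEV"                     # LID 05h, commands supported and effects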
00:28:20.571 [2024-04-17 10:24:53.879391] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.879407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.879422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.879500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.879509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.879513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879518] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.879527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.879545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.879562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.879651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.879660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.879664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.879676] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:20.571 [2024-04-17 10:24:53.879682] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:20.571 [2024-04-17 10:24:53.879694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879699] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.879713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.879726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.879806] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.879814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.879819] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on 
tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.879836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879846] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.879854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.879867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.879950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.879958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.879962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.879980] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.879992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880101] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880123] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880133] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880273] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880540] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880734] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880767] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 
00:28:20.571 [2024-04-17 10:24:53.880775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.880889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.571 [2024-04-17 10:24:53.880897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.571 [2024-04-17 10:24:53.880902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.571 [2024-04-17 10:24:53.880920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.571 [2024-04-17 10:24:53.880930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.571 [2024-04-17 10:24:53.880938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-04-17 10:24:53.880951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.571 [2024-04-17 10:24:53.881035] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881043] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881048] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881064] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881070] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881074] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881187] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881210] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881214] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 
[2024-04-17 10:24:53.881238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881330] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881356] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881462] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881620] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881625] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881737] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881750] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881777] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.881877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.881886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.881891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.881908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.881917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.881926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.881938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.882014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.882022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.882027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.882044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.882062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.882075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.882155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.882163] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 
[2024-04-17 10:24:53.882168] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.572 [2024-04-17 10:24:53.882185] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.572 [2024-04-17 10:24:53.882195] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.572 [2024-04-17 10:24:53.882203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-04-17 10:24:53.882215] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.572 [2024-04-17 10:24:53.882296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.572 [2024-04-17 10:24:53.882305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.572 [2024-04-17 10:24:53.882309] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882313] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.573 [2024-04-17 10:24:53.882326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.573 [2024-04-17 10:24:53.882344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.573 [2024-04-17 10:24:53.882359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.573 [2024-04-17 10:24:53.882443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.573 [2024-04-17 10:24:53.882452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.573 [2024-04-17 10:24:53.882456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.573 [2024-04-17 10:24:53.882473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.573 [2024-04-17 10:24:53.882491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.573 [2024-04-17 10:24:53.882504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.573 [2024-04-17 10:24:53.882581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.573 [2024-04-17 10:24:53.882589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.573 [2024-04-17 10:24:53.882593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.573 [2024-04-17 10:24:53.882610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.882620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.573 [2024-04-17 10:24:53.882629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.573 [2024-04-17 10:24:53.882641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.573 [2024-04-17 10:24:53.886664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.573 [2024-04-17 10:24:53.886673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.573 [2024-04-17 10:24:53.886677] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.886682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.573 [2024-04-17 10:24:53.886696] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.886702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.886706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23909e0) 00:28:20.573 [2024-04-17 10:24:53.886715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.573 [2024-04-17 10:24:53.886729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23f8b50, cid 3, qid 0 00:28:20.573 [2024-04-17 10:24:53.886937] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:20.573 [2024-04-17 10:24:53.886945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:20.573 [2024-04-17 10:24:53.886949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:20.573 [2024-04-17 10:24:53.886954] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23f8b50) on tqpair=0x23909e0 00:28:20.573 [2024-04-17 10:24:53.886965] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:20.832 0 Kelvin (-273 Celsius) 00:28:20.832 Available Spare: 0% 00:28:20.832 Available Spare Threshold: 0% 00:28:20.832 Life Percentage Used: 0% 00:28:20.832 Data Units Read: 0 00:28:20.832 Data Units Written: 0 00:28:20.832 Host Read Commands: 0 00:28:20.832 Host Write Commands: 0 00:28:20.832 Controller Busy Time: 0 minutes 00:28:20.832 Power Cycles: 0 00:28:20.832 Power On Hours: 0 hours 00:28:20.832 Unsafe Shutdowns: 0 00:28:20.832 Unrecoverable Media Errors: 0 00:28:20.832 Lifetime Error Log Entries: 0 00:28:20.832 Warning Temperature Time: 0 minutes 00:28:20.832 Critical Temperature Time: 0 minutes 00:28:20.832 00:28:20.832 Number of Queues 00:28:20.832 ================ 00:28:20.832 Number of I/O Submission Queues: 127 00:28:20.832 Number of I/O Completion Queues: 127 00:28:20.832 00:28:20.832 Active Namespaces 00:28:20.832 ================= 00:28:20.832 Namespace ID:1 00:28:20.832 Error Recovery Timeout: Unlimited 00:28:20.832 Command Set Identifier: NVM (00h) 00:28:20.832 Deallocate: Supported 00:28:20.832 Deallocated/Unwritten Error: Not Supported 00:28:20.832 
Deallocated Read Value: Unknown 00:28:20.832 Deallocate in Write Zeroes: Not Supported 00:28:20.832 Deallocated Guard Field: 0xFFFF 00:28:20.832 Flush: Supported 00:28:20.832 Reservation: Supported 00:28:20.832 Namespace Sharing Capabilities: Multiple Controllers 00:28:20.832 Size (in LBAs): 131072 (0GiB) 00:28:20.832 Capacity (in LBAs): 131072 (0GiB) 00:28:20.832 Utilization (in LBAs): 131072 (0GiB) 00:28:20.832 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:20.832 EUI64: ABCDEF0123456789 00:28:20.832 UUID: 10e93e93-d052-4bf2-894e-bec0648c97e7 00:28:20.832 Thin Provisioning: Not Supported 00:28:20.832 Per-NS Atomic Units: Yes 00:28:20.832 Atomic Boundary Size (Normal): 0 00:28:20.832 Atomic Boundary Size (PFail): 0 00:28:20.832 Atomic Boundary Offset: 0 00:28:20.832 Maximum Single Source Range Length: 65535 00:28:20.833 Maximum Copy Length: 65535 00:28:20.833 Maximum Source Range Count: 1 00:28:20.833 NGUID/EUI64 Never Reused: No 00:28:20.833 Namespace Write Protected: No 00:28:20.833 Number of LBA Formats: 1 00:28:20.833 Current LBA Format: LBA Format #00 00:28:20.833 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:20.833 00:28:20.833 10:24:53 -- host/identify.sh@51 -- # sync 00:28:20.833 10:24:53 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.833 10:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.833 10:24:53 -- common/autotest_common.sh@10 -- # set +x 00:28:20.833 10:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.833 10:24:53 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:20.833 10:24:53 -- host/identify.sh@56 -- # nvmftestfini 00:28:20.833 10:24:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:20.833 10:24:53 -- nvmf/common.sh@116 -- # sync 00:28:20.833 10:24:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:20.833 10:24:53 -- nvmf/common.sh@119 -- # set +e 00:28:20.833 10:24:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:20.833 10:24:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:20.833 rmmod nvme_tcp 00:28:20.833 rmmod nvme_fabrics 00:28:20.833 rmmod nvme_keyring 00:28:20.833 10:24:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:20.833 10:24:53 -- nvmf/common.sh@123 -- # set -e 00:28:20.833 10:24:53 -- nvmf/common.sh@124 -- # return 0 00:28:20.833 10:24:53 -- nvmf/common.sh@477 -- # '[' -n 3585432 ']' 00:28:20.833 10:24:53 -- nvmf/common.sh@478 -- # killprocess 3585432 00:28:20.833 10:24:53 -- common/autotest_common.sh@926 -- # '[' -z 3585432 ']' 00:28:20.833 10:24:53 -- common/autotest_common.sh@930 -- # kill -0 3585432 00:28:20.833 10:24:53 -- common/autotest_common.sh@931 -- # uname 00:28:20.833 10:24:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:20.833 10:24:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3585432 00:28:20.833 10:24:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:20.833 10:24:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:20.833 10:24:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3585432' 00:28:20.833 killing process with pid 3585432 00:28:20.833 10:24:54 -- common/autotest_common.sh@945 -- # kill 3585432 00:28:20.833 [2024-04-17 10:24:54.033325] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:20.833 10:24:54 -- common/autotest_common.sh@950 -- # wait 3585432 00:28:21.097 
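Editor's note: the shell trace here is the identify test's teardown. host/identify.sh deletes the subsystem over RPC, nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, kills the SPDK target (pid 3585432) and, just below, flushes the test address from the interface. A condensed manual equivalent, with the paths, pid and interface name taken from this particular run, might look like this sketch:

    # Condensed manual version of the teardown traced here; values come from this run.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TGT_PID=3585432                      # SPDK target pid reported by killprocess above

    # Remove the subsystem from the running target over JSON-RPC.
    "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules loaded for the test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the target and wait for it to exit.
    kill "$TGT_PID"
    while kill -0 "$TGT_PID" 2>/dev/null; do sleep 0.5; done

    # Drop the test address from the target-side interface used on this rig.
    ip -4 addr flush cvl_0_1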
10:24:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:21.097 10:24:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:21.097 10:24:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:21.097 10:24:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.097 10:24:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:21.097 10:24:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.097 10:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.097 10:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.635 10:24:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:23.635 00:28:23.635 real 0m9.844s 00:28:23.635 user 0m8.373s 00:28:23.635 sys 0m4.741s 00:28:23.635 10:24:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.635 10:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:23.635 ************************************ 00:28:23.635 END TEST nvmf_identify 00:28:23.635 ************************************ 00:28:23.635 10:24:56 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:23.635 10:24:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:23.635 10:24:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:23.635 10:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:23.635 ************************************ 00:28:23.635 START TEST nvmf_perf 00:28:23.635 ************************************ 00:28:23.635 10:24:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:23.635 * Looking for test storage... 00:28:23.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.635 10:24:56 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.635 10:24:56 -- nvmf/common.sh@7 -- # uname -s 00:28:23.635 10:24:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.635 10:24:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.635 10:24:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.635 10:24:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.635 10:24:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.635 10:24:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.635 10:24:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.635 10:24:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.635 10:24:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.635 10:24:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.635 10:24:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:23.635 10:24:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:23.635 10:24:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.635 10:24:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.635 10:24:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.635 10:24:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.635 10:24:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.635 10:24:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.635 10:24:56 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.635 10:24:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.635 10:24:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.635 10:24:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.635 10:24:56 -- paths/export.sh@5 -- # export PATH 00:28:23.636 10:24:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.636 10:24:56 -- nvmf/common.sh@46 -- # : 0 00:28:23.636 10:24:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:23.636 10:24:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:23.636 10:24:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:23.636 10:24:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.636 10:24:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.636 10:24:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:23.636 10:24:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:23.636 10:24:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:23.636 10:24:56 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:23.636 10:24:56 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:23.636 10:24:56 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:23.636 10:24:56 -- host/perf.sh@17 -- # nvmftestinit 00:28:23.636 10:24:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:23.636 10:24:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.636 10:24:56 -- nvmf/common.sh@436 -- # 
prepare_net_devs 00:28:23.636 10:24:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:23.636 10:24:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:23.636 10:24:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.636 10:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.636 10:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.636 10:24:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:23.636 10:24:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:23.636 10:24:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:23.636 10:24:56 -- common/autotest_common.sh@10 -- # set +x 00:28:28.912 10:25:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:28.912 10:25:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:28.912 10:25:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:28.912 10:25:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:28.912 10:25:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:28.912 10:25:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:28.912 10:25:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:28.912 10:25:01 -- nvmf/common.sh@294 -- # net_devs=() 00:28:28.912 10:25:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:28.912 10:25:01 -- nvmf/common.sh@295 -- # e810=() 00:28:28.912 10:25:01 -- nvmf/common.sh@295 -- # local -ga e810 00:28:28.912 10:25:01 -- nvmf/common.sh@296 -- # x722=() 00:28:28.912 10:25:01 -- nvmf/common.sh@296 -- # local -ga x722 00:28:28.912 10:25:01 -- nvmf/common.sh@297 -- # mlx=() 00:28:28.912 10:25:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:28.912 10:25:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.912 10:25:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:28.912 10:25:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:28.913 10:25:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.913 10:25:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:28.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:28.913 10:25:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.913 10:25:01 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:28.913 10:25:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:28.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:28.913 10:25:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.913 10:25:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.913 10:25:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.913 10:25:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:28.913 Found net devices under 0000:af:00.0: cvl_0_0 00:28:28.913 10:25:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.913 10:25:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:28.913 10:25:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.913 10:25:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.913 10:25:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:28.913 Found net devices under 0000:af:00.1: cvl_0_1 00:28:28.913 10:25:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.913 10:25:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:28.913 10:25:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:28.913 10:25:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:28.913 10:25:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.913 10:25:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.913 10:25:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.913 10:25:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:28.913 10:25:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.913 10:25:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.913 10:25:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:28.913 10:25:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.913 10:25:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.913 10:25:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:28.913 10:25:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:28.913 10:25:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.913 10:25:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.913 10:25:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.913 10:25:01 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.913 10:25:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:28.913 10:25:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.913 10:25:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.913 10:25:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.913 10:25:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:28.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:28:28.913 00:28:28.913 --- 10.0.0.2 ping statistics --- 00:28:28.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.913 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:28.913 10:25:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:28:28.913 00:28:28.913 --- 10.0.0.1 ping statistics --- 00:28:28.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.913 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:28:28.913 10:25:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.913 10:25:02 -- nvmf/common.sh@410 -- # return 0 00:28:28.913 10:25:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:28.913 10:25:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.913 10:25:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:28.913 10:25:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:28.913 10:25:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.913 10:25:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:28.913 10:25:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:28.913 10:25:02 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:28.913 10:25:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:28.913 10:25:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:28.913 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 10:25:02 -- nvmf/common.sh@469 -- # nvmfpid=3589356 00:28:28.913 10:25:02 -- nvmf/common.sh@470 -- # waitforlisten 3589356 00:28:28.913 10:25:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.913 10:25:02 -- common/autotest_common.sh@819 -- # '[' -z 3589356 ']' 00:28:28.913 10:25:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.913 10:25:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:28.913 10:25:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.913 10:25:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:28.913 10:25:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 [2024-04-17 10:25:02.213437] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
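For reference, the network bring-up that nvmftestinit performs in the trace above reduces to the short sequence below. This is a consolidated sketch of the commands already visible in the log, not an extra step of the test: the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk and the 10.0.0.0/24 addresses are the values used by this particular run and will differ on other hosts.

  # target side: move one E810 port into its own namespace and address it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: the second port stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so the target listens on 10.0.0.2 while the perf and fio initiators connect from the default namespace via cvl_0_1 (10.0.0.1).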
00:28:28.913 [2024-04-17 10:25:02.213493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.173 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.173 [2024-04-17 10:25:02.300528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.173 [2024-04-17 10:25:02.387218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:29.173 [2024-04-17 10:25:02.387361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.173 [2024-04-17 10:25:02.387373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.173 [2024-04-17 10:25:02.387382] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.173 [2024-04-17 10:25:02.387450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.173 [2024-04-17 10:25:02.387551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.173 [2024-04-17 10:25:02.387689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.173 [2024-04-17 10:25:02.387690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.111 10:25:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:30.111 10:25:03 -- common/autotest_common.sh@852 -- # return 0 00:28:30.111 10:25:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:30.111 10:25:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:30.111 10:25:03 -- common/autotest_common.sh@10 -- # set +x 00:28:30.111 10:25:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.111 10:25:03 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:30.111 10:25:03 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:33.420 10:25:06 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:33.420 10:25:06 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:33.420 10:25:06 -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:28:33.420 10:25:06 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:33.420 10:25:06 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:33.420 10:25:06 -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:28:33.420 10:25:06 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:33.420 10:25:06 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:33.420 10:25:06 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:33.680 [2024-04-17 10:25:06.940578] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.680 10:25:06 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.940 10:25:07 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:33.940 10:25:07 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.199 10:25:07 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:28:34.199 10:25:07 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:34.458 10:25:07 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.718 [2024-04-17 10:25:07.926876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.718 10:25:07 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:34.977 10:25:08 -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:28:34.977 10:25:08 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:28:34.977 10:25:08 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:34.977 10:25:08 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:28:36.357 Initializing NVMe Controllers 00:28:36.357 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:28:36.357 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:28:36.357 Initialization complete. Launching workers. 00:28:36.357 ======================================================== 00:28:36.357 Latency(us) 00:28:36.357 Device Information : IOPS MiB/s Average min max 00:28:36.357 PCIE (0000:86:00.0) NSID 1 from core 0: 70484.94 275.33 453.31 27.99 4357.49 00:28:36.357 ======================================================== 00:28:36.357 Total : 70484.94 275.33 453.31 27.99 4357.49 00:28:36.357 00:28:36.357 10:25:09 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.357 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.733 Initializing NVMe Controllers 00:28:37.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:37.733 Initialization complete. Launching workers. 
00:28:37.733 ======================================================== 00:28:37.733 Latency(us) 00:28:37.733 Device Information : IOPS MiB/s Average min max 00:28:37.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.95 0.36 11257.75 172.96 44999.94 00:28:37.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 73.96 0.29 13951.64 7957.90 47885.59 00:28:37.733 ======================================================== 00:28:37.733 Total : 164.91 0.64 12465.92 172.96 47885.59 00:28:37.733 00:28:37.733 10:25:10 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.733 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.108 Initializing NVMe Controllers 00:28:39.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:39.108 Initialization complete. Launching workers. 00:28:39.108 ======================================================== 00:28:39.108 Latency(us) 00:28:39.108 Device Information : IOPS MiB/s Average min max 00:28:39.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7872.20 30.75 4066.76 602.38 8213.52 00:28:39.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3862.91 15.09 8306.43 5374.41 16552.55 00:28:39.108 ======================================================== 00:28:39.108 Total : 11735.11 45.84 5462.35 602.38 16552.55 00:28:39.108 00:28:39.108 10:25:12 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:39.108 10:25:12 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:39.108 10:25:12 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.108 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.643 Initializing NVMe Controllers 00:28:41.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.643 Controller IO queue size 128, less than required. 00:28:41.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.643 Controller IO queue size 128, less than required. 00:28:41.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.643 Initialization complete. Launching workers. 
00:28:41.643 ======================================================== 00:28:41.643 Latency(us) 00:28:41.643 Device Information : IOPS MiB/s Average min max 00:28:41.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1176.04 294.01 111090.26 78731.57 149685.18 00:28:41.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.00 151.00 224661.62 81528.04 345601.12 00:28:41.643 ======================================================== 00:28:41.643 Total : 1780.04 445.01 149627.38 78731.57 345601.12 00:28:41.643 00:28:41.643 10:25:14 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:41.643 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.643 No valid NVMe controllers or AIO or URING devices found 00:28:41.643 Initializing NVMe Controllers 00:28:41.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.643 Controller IO queue size 128, less than required. 00:28:41.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.643 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:41.643 Controller IO queue size 128, less than required. 00:28:41.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:41.643 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:41.643 WARNING: Some requested NVMe devices were skipped 00:28:41.643 10:25:14 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:41.643 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.240 Initializing NVMe Controllers 00:28:44.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.240 Controller IO queue size 128, less than required. 00:28:44.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.240 Controller IO queue size 128, less than required. 00:28:44.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:44.240 Initialization complete. Launching workers. 
00:28:44.240 00:28:44.240 ==================== 00:28:44.240 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:44.240 TCP transport: 00:28:44.240 polls: 19111 00:28:44.240 idle_polls: 8787 00:28:44.240 sock_completions: 10324 00:28:44.240 nvme_completions: 4609 00:28:44.240 submitted_requests: 7187 00:28:44.240 queued_requests: 1 00:28:44.240 00:28:44.240 ==================== 00:28:44.240 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:44.240 TCP transport: 00:28:44.240 polls: 19303 00:28:44.240 idle_polls: 8499 00:28:44.240 sock_completions: 10804 00:28:44.240 nvme_completions: 4604 00:28:44.240 submitted_requests: 7135 00:28:44.240 queued_requests: 1 00:28:44.240 ======================================================== 00:28:44.240 Latency(us) 00:28:44.240 Device Information : IOPS MiB/s Average min max 00:28:44.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1213.51 303.38 108749.69 57274.72 183723.07 00:28:44.240 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1212.02 303.00 107093.10 48691.34 148878.19 00:28:44.240 ======================================================== 00:28:44.240 Total : 2425.53 606.38 107921.91 48691.34 183723.07 00:28:44.240 00:28:44.240 10:25:17 -- host/perf.sh@66 -- # sync 00:28:44.240 10:25:17 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.499 10:25:17 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:44.499 10:25:17 -- host/perf.sh@71 -- # '[' -n 0000:86:00.0 ']' 00:28:44.499 10:25:17 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:47.787 10:25:21 -- host/perf.sh@72 -- # ls_guid=585003a5-2bc6-481f-8785-6498b657eb1b 00:28:47.787 10:25:21 -- host/perf.sh@73 -- # get_lvs_free_mb 585003a5-2bc6-481f-8785-6498b657eb1b 00:28:47.787 10:25:21 -- common/autotest_common.sh@1343 -- # local lvs_uuid=585003a5-2bc6-481f-8785-6498b657eb1b 00:28:47.787 10:25:21 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:47.787 10:25:21 -- common/autotest_common.sh@1345 -- # local fc 00:28:47.787 10:25:21 -- common/autotest_common.sh@1346 -- # local cs 00:28:47.787 10:25:21 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:48.045 10:25:21 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:48.045 { 00:28:48.045 "uuid": "585003a5-2bc6-481f-8785-6498b657eb1b", 00:28:48.045 "name": "lvs_0", 00:28:48.045 "base_bdev": "Nvme0n1", 00:28:48.045 "total_data_clusters": 238234, 00:28:48.045 "free_clusters": 238234, 00:28:48.045 "block_size": 512, 00:28:48.045 "cluster_size": 4194304 00:28:48.045 } 00:28:48.045 ]' 00:28:48.045 10:25:21 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="585003a5-2bc6-481f-8785-6498b657eb1b") .free_clusters' 00:28:48.045 10:25:21 -- common/autotest_common.sh@1348 -- # fc=238234 00:28:48.045 10:25:21 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="585003a5-2bc6-481f-8785-6498b657eb1b") .cluster_size' 00:28:48.304 10:25:21 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:48.304 10:25:21 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:28:48.304 10:25:21 -- common/autotest_common.sh@1353 -- # echo 952936 00:28:48.304 952936 00:28:48.304 10:25:21 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:48.304 10:25:21 
-- host/perf.sh@78 -- # free_mb=20480 00:28:48.304 10:25:21 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 585003a5-2bc6-481f-8785-6498b657eb1b lbd_0 20480 00:28:48.563 10:25:21 -- host/perf.sh@80 -- # lb_guid=dabaf50c-ae9b-44e7-9a57-22620bc1c7ad 00:28:48.563 10:25:21 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore dabaf50c-ae9b-44e7-9a57-22620bc1c7ad lvs_n_0 00:28:49.500 10:25:22 -- host/perf.sh@83 -- # ls_nested_guid=10aba884-ffab-47a6-96a2-cf4b643059b1 00:28:49.500 10:25:22 -- host/perf.sh@84 -- # get_lvs_free_mb 10aba884-ffab-47a6-96a2-cf4b643059b1 00:28:49.500 10:25:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=10aba884-ffab-47a6-96a2-cf4b643059b1 00:28:49.500 10:25:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:49.500 10:25:22 -- common/autotest_common.sh@1345 -- # local fc 00:28:49.500 10:25:22 -- common/autotest_common.sh@1346 -- # local cs 00:28:49.500 10:25:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:49.758 10:25:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:49.758 { 00:28:49.758 "uuid": "585003a5-2bc6-481f-8785-6498b657eb1b", 00:28:49.758 "name": "lvs_0", 00:28:49.758 "base_bdev": "Nvme0n1", 00:28:49.758 "total_data_clusters": 238234, 00:28:49.758 "free_clusters": 233114, 00:28:49.758 "block_size": 512, 00:28:49.758 "cluster_size": 4194304 00:28:49.758 }, 00:28:49.758 { 00:28:49.758 "uuid": "10aba884-ffab-47a6-96a2-cf4b643059b1", 00:28:49.758 "name": "lvs_n_0", 00:28:49.758 "base_bdev": "dabaf50c-ae9b-44e7-9a57-22620bc1c7ad", 00:28:49.758 "total_data_clusters": 5114, 00:28:49.758 "free_clusters": 5114, 00:28:49.758 "block_size": 512, 00:28:49.758 "cluster_size": 4194304 00:28:49.758 } 00:28:49.758 ]' 00:28:49.758 10:25:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="10aba884-ffab-47a6-96a2-cf4b643059b1") .free_clusters' 00:28:49.758 10:25:22 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:49.758 10:25:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="10aba884-ffab-47a6-96a2-cf4b643059b1") .cluster_size' 00:28:49.758 10:25:22 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:49.758 10:25:22 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:28:49.758 10:25:22 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:49.758 20456 00:28:49.758 10:25:22 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:49.758 10:25:22 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10aba884-ffab-47a6-96a2-cf4b643059b1 lbd_nest_0 20456 00:28:50.016 10:25:23 -- host/perf.sh@88 -- # lb_nested_guid=89dd93b0-97d8-448d-8cca-9edfe82807b5 00:28:50.016 10:25:23 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.274 10:25:23 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:50.274 10:25:23 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 89dd93b0-97d8-448d-8cca-9edfe82807b5 00:28:50.532 10:25:23 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.791 10:25:23 -- host/perf.sh@95 -- # qd_depth=("1" "32" 
"128") 00:28:50.791 10:25:23 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:50.791 10:25:23 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:50.791 10:25:23 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:50.791 10:25:23 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:50.791 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.999 Initializing NVMe Controllers 00:29:02.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:02.999 Initialization complete. Launching workers. 00:29:02.999 ======================================================== 00:29:02.999 Latency(us) 00:29:02.999 Device Information : IOPS MiB/s Average min max 00:29:02.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.50 0.02 21119.33 203.62 47480.60 00:29:02.999 ======================================================== 00:29:02.999 Total : 47.50 0.02 21119.33 203.62 47480.60 00:29:02.999 00:29:02.999 10:25:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:02.999 10:25:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.999 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.975 Initializing NVMe Controllers 00:29:12.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:12.975 Initialization complete. Launching workers. 00:29:12.975 ======================================================== 00:29:12.975 Latency(us) 00:29:12.975 Device Information : IOPS MiB/s Average min max 00:29:12.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.00 8.75 14296.07 5041.66 50821.85 00:29:12.975 ======================================================== 00:29:12.975 Total : 70.00 8.75 14296.07 5041.66 50821.85 00:29:12.975 00:29:12.975 10:25:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:12.975 10:25:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:12.975 10:25:44 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:12.975 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.952 Initializing NVMe Controllers 00:29:22.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.952 Initialization complete. Launching workers. 
00:29:22.952 ======================================================== 00:29:22.952 Latency(us) 00:29:22.952 Device Information : IOPS MiB/s Average min max 00:29:22.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6958.10 3.40 4599.55 315.92 12081.91 00:29:22.952 ======================================================== 00:29:22.952 Total : 6958.10 3.40 4599.55 315.92 12081.91 00:29:22.952 00:29:22.952 10:25:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:22.952 10:25:54 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:22.952 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.933 Initializing NVMe Controllers 00:29:32.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.933 Initialization complete. Launching workers. 00:29:32.933 ======================================================== 00:29:32.933 Latency(us) 00:29:32.933 Device Information : IOPS MiB/s Average min max 00:29:32.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2394.56 299.32 13364.16 948.60 30985.33 00:29:32.933 ======================================================== 00:29:32.933 Total : 2394.56 299.32 13364.16 948.60 30985.33 00:29:32.933 00:29:32.934 10:26:05 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:32.934 10:26:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:32.934 10:26:05 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.934 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.933 Initializing NVMe Controllers 00:29:42.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.933 Controller IO queue size 128, less than required. 00:29:42.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:42.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.933 Initialization complete. Launching workers. 00:29:42.933 ======================================================== 00:29:42.933 Latency(us) 00:29:42.933 Device Information : IOPS MiB/s Average min max 00:29:42.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10179.42 4.97 12583.94 1933.56 29278.16 00:29:42.933 ======================================================== 00:29:42.933 Total : 10179.42 4.97 12583.94 1933.56 29278.16 00:29:42.933 00:29:42.933 10:26:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:42.933 10:26:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.933 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.171 Initializing NVMe Controllers 00:29:55.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.171 Controller IO queue size 128, less than required. 00:29:55.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:55.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:55.171 Initialization complete. Launching workers. 00:29:55.171 ======================================================== 00:29:55.171 Latency(us) 00:29:55.171 Device Information : IOPS MiB/s Average min max 00:29:55.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1196.56 149.57 107401.41 24129.15 247358.24 00:29:55.171 ======================================================== 00:29:55.171 Total : 1196.56 149.57 107401.41 24129.15 247358.24 00:29:55.171 00:29:55.171 10:26:26 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.171 10:26:26 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89dd93b0-97d8-448d-8cca-9edfe82807b5 00:29:55.171 10:26:27 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:55.171 10:26:27 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dabaf50c-ae9b-44e7-9a57-22620bc1c7ad 00:29:55.171 10:26:27 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:55.171 10:26:28 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:55.171 10:26:28 -- host/perf.sh@114 -- # nvmftestfini 00:29:55.171 10:26:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:55.171 10:26:28 -- nvmf/common.sh@116 -- # sync 00:29:55.172 10:26:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:55.172 10:26:28 -- nvmf/common.sh@119 -- # set +e 00:29:55.172 10:26:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:55.172 10:26:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:55.172 rmmod nvme_tcp 00:29:55.172 rmmod nvme_fabrics 00:29:55.172 rmmod nvme_keyring 00:29:55.172 10:26:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:55.172 10:26:28 -- nvmf/common.sh@123 -- # set -e 00:29:55.172 10:26:28 -- nvmf/common.sh@124 -- # return 0 00:29:55.172 10:26:28 -- nvmf/common.sh@477 -- # '[' -n 3589356 ']' 00:29:55.172 10:26:28 -- nvmf/common.sh@478 -- # killprocess 3589356 00:29:55.172 10:26:28 -- common/autotest_common.sh@926 -- # '[' -z 3589356 ']' 00:29:55.172 10:26:28 -- common/autotest_common.sh@930 -- # kill -0 3589356 00:29:55.172 10:26:28 -- common/autotest_common.sh@931 -- # uname 00:29:55.172 10:26:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.172 10:26:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3589356 00:29:55.172 10:26:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:55.172 10:26:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:55.172 10:26:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3589356' 00:29:55.172 killing process with pid 3589356 00:29:55.172 10:26:28 -- common/autotest_common.sh@945 -- # kill 3589356 00:29:55.172 10:26:28 -- common/autotest_common.sh@950 -- # wait 3589356 00:29:56.585 10:26:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:56.585 10:26:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:56.585 10:26:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:56.585 10:26:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:56.585 10:26:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:56.585 10:26:29 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.585 10:26:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.585 10:26:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.121 10:26:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:59.121 00:29:59.121 real 1m35.486s 00:29:59.121 user 5m45.404s 00:29:59.121 sys 0m14.966s 00:29:59.121 10:26:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.121 10:26:31 -- common/autotest_common.sh@10 -- # set +x 00:29:59.121 ************************************ 00:29:59.121 END TEST nvmf_perf 00:29:59.121 ************************************ 00:29:59.121 10:26:31 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:59.121 10:26:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:59.121 10:26:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.121 10:26:31 -- common/autotest_common.sh@10 -- # set +x 00:29:59.121 ************************************ 00:29:59.121 START TEST nvmf_fio_host 00:29:59.121 ************************************ 00:29:59.121 10:26:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:59.121 * Looking for test storage... 00:29:59.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.121 10:26:32 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.121 10:26:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.121 10:26:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.121 10:26:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.121 10:26:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@5 -- # export PATH 00:29:59.121 10:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.121 10:26:32 -- nvmf/common.sh@7 -- # uname -s 00:29:59.121 10:26:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.121 10:26:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.121 10:26:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.121 10:26:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.121 10:26:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.121 10:26:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.121 10:26:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.121 10:26:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.121 10:26:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.121 10:26:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.121 10:26:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:59.121 10:26:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:59.121 10:26:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.121 10:26:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.121 10:26:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.121 10:26:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.121 10:26:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.121 10:26:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.121 10:26:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.121 10:26:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.121 10:26:32 -- paths/export.sh@5 -- # export PATH 00:29:59.122 10:26:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.122 10:26:32 -- nvmf/common.sh@46 -- # : 0 00:29:59.122 10:26:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:59.122 10:26:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:59.122 10:26:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:59.122 10:26:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.122 10:26:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.122 10:26:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:59.122 10:26:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:59.122 10:26:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:59.122 10:26:32 -- host/fio.sh@12 -- # nvmftestinit 00:29:59.122 10:26:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:59.122 10:26:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.122 10:26:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:59.122 10:26:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:59.122 10:26:32 -- 
nvmf/common.sh@400 -- # remove_spdk_ns 00:29:59.122 10:26:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.122 10:26:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.122 10:26:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.122 10:26:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:59.122 10:26:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:59.122 10:26:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:59.122 10:26:32 -- common/autotest_common.sh@10 -- # set +x 00:30:04.392 10:26:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:04.392 10:26:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:04.392 10:26:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:04.392 10:26:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:04.392 10:26:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:04.392 10:26:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:04.392 10:26:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:04.392 10:26:37 -- nvmf/common.sh@294 -- # net_devs=() 00:30:04.392 10:26:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:04.392 10:26:37 -- nvmf/common.sh@295 -- # e810=() 00:30:04.392 10:26:37 -- nvmf/common.sh@295 -- # local -ga e810 00:30:04.392 10:26:37 -- nvmf/common.sh@296 -- # x722=() 00:30:04.392 10:26:37 -- nvmf/common.sh@296 -- # local -ga x722 00:30:04.392 10:26:37 -- nvmf/common.sh@297 -- # mlx=() 00:30:04.392 10:26:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:04.392 10:26:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.392 10:26:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.393 10:26:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.393 10:26:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.393 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.393 10:26:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@351 -- # [[ tcp == 
rdma ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.393 10:26:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.393 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.393 10:26:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.393 10:26:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.393 10:26:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.393 10:26:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:04.393 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.393 10:26:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.393 10:26:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.393 10:26:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.393 10:26:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.393 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.393 10:26:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:04.393 10:26:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:04.393 10:26:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.393 10:26:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.393 10:26:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:04.393 10:26:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.393 10:26:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.393 10:26:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:04.393 10:26:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.393 10:26:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.393 10:26:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:04.393 10:26:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:04.393 10:26:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.393 10:26:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.393 10:26:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.393 10:26:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.393 10:26:37 -- nvmf/common.sh@257 -- # ip link set 
cvl_0_1 up 00:30:04.393 10:26:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.393 10:26:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.393 10:26:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.393 10:26:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:04.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:30:04.393 00:30:04.393 --- 10.0.0.2 ping statistics --- 00:30:04.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.393 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:30:04.393 10:26:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:30:04.393 00:30:04.393 --- 10.0.0.1 ping statistics --- 00:30:04.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.393 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:30:04.393 10:26:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.393 10:26:37 -- nvmf/common.sh@410 -- # return 0 00:30:04.393 10:26:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:04.393 10:26:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.393 10:26:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:04.393 10:26:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.393 10:26:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:04.393 10:26:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:04.393 10:26:37 -- host/fio.sh@14 -- # [[ y != y ]] 00:30:04.393 10:26:37 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:30:04.393 10:26:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:04.393 10:26:37 -- common/autotest_common.sh@10 -- # set +x 00:30:04.393 10:26:37 -- host/fio.sh@22 -- # nvmfpid=3608362 00:30:04.393 10:26:37 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:04.393 10:26:37 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.393 10:26:37 -- host/fio.sh@26 -- # waitforlisten 3608362 00:30:04.393 10:26:37 -- common/autotest_common.sh@819 -- # '[' -z 3608362 ']' 00:30:04.393 10:26:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.393 10:26:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:04.393 10:26:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.393 10:26:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:04.393 10:26:37 -- common/autotest_common.sh@10 -- # set +x 00:30:04.393 [2024-04-17 10:26:37.628649] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:30:04.393 [2024-04-17 10:26:37.628703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.393 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.393 [2024-04-17 10:26:37.718537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.653 [2024-04-17 10:26:37.806368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:04.653 [2024-04-17 10:26:37.806511] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.653 [2024-04-17 10:26:37.806523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.653 [2024-04-17 10:26:37.806531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.653 [2024-04-17 10:26:37.806578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.653 [2024-04-17 10:26:37.806597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.653 [2024-04-17 10:26:37.806720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.653 [2024-04-17 10:26:37.806722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.615 10:26:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.615 10:26:38 -- common/autotest_common.sh@852 -- # return 0 00:30:05.615 10:26:38 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 [2024-04-17 10:26:38.573217] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:30:05.615 10:26:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 10:26:38 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 Malloc1 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 [2024-04-17 10:26:38.665356] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.615 10:26:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.615 10:26:38 -- common/autotest_common.sh@10 -- # set +x 00:30:05.615 10:26:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.615 10:26:38 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:05.615 10:26:38 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:05.615 10:26:38 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:05.615 10:26:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:05.615 10:26:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:05.615 10:26:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:05.615 10:26:38 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:05.615 10:26:38 -- common/autotest_common.sh@1320 -- # shift 00:30:05.616 10:26:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:05.616 10:26:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:05.616 10:26:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:05.616 10:26:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:05.616 10:26:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:05.616 10:26:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:05.616 10:26:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:05.616 10:26:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:05.877 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:05.877 fio-3.35 00:30:05.877 Starting 1 thread 00:30:05.877 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.416 00:30:08.416 test: (groupid=0, jobs=1): err= 0: pid=3608785: Wed Apr 17 10:26:41 2024 00:30:08.416 read: IOPS=8267, BW=32.3MiB/s (33.9MB/s)(66.2MiB/2049msec) 00:30:08.416 slat (usec): min=2, max=240, avg= 2.54, stdev= 2.55 00:30:08.416 clat 
(usec): min=3057, max=55434, avg=8532.69, stdev=2732.96 00:30:08.416 lat (usec): min=3090, max=55436, avg=8535.23, stdev=2732.90 00:30:08.416 clat percentiles (usec): 00:30:08.416 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 7898], 00:30:08.416 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:30:08.416 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:30:08.416 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[53216], 99.95th=[54264], 00:30:08.416 | 99.99th=[55313] 00:30:08.416 bw ( KiB/s): min=32952, max=34176, per=100.00%, avg=33754.00, stdev=552.30, samples=4 00:30:08.416 iops : min= 8238, max= 8544, avg=8438.50, stdev=138.08, samples=4 00:30:08.416 write: IOPS=8266, BW=32.3MiB/s (33.9MB/s)(66.2MiB/2049msec); 0 zone resets 00:30:08.416 slat (usec): min=2, max=223, avg= 2.66, stdev= 1.97 00:30:08.416 clat (usec): min=2417, max=54366, avg=6886.94, stdev=2774.11 00:30:08.416 lat (usec): min=2432, max=54368, avg=6889.60, stdev=2774.07 00:30:08.416 clat percentiles (usec): 00:30:08.416 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6325], 00:30:08.416 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:30:08.416 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7570], 00:30:08.416 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[52691], 99.95th=[53216], 00:30:08.416 | 99.99th=[54264] 00:30:08.416 bw ( KiB/s): min=33384, max=33944, per=100.00%, avg=33750.00, stdev=251.83, samples=4 00:30:08.416 iops : min= 8346, max= 8486, avg=8437.50, stdev=62.96, samples=4 00:30:08.416 lat (msec) : 4=0.12%, 10=99.47%, 20=0.04%, 50=0.12%, 100=0.25% 00:30:08.416 cpu : usr=71.97%, sys=24.95%, ctx=67, majf=0, minf=5 00:30:08.416 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:08.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:08.416 issued rwts: total=16940,16939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.416 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:08.416 00:30:08.416 Run status group 0 (all jobs): 00:30:08.416 READ: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=66.2MiB (69.4MB), run=2049-2049msec 00:30:08.416 WRITE: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=66.2MiB (69.4MB), run=2049-2049msec 00:30:08.416 10:26:41 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.416 10:26:41 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.416 10:26:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:08.416 10:26:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.416 10:26:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:08.416 10:26:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.416 10:26:41 -- common/autotest_common.sh@1320 -- # shift 00:30:08.416 10:26:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:08.416 10:26:41 -- common/autotest_common.sh@1323 
-- # for sanitizer in "${sanitizers[@]}" 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:08.416 10:26:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:08.416 10:26:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:08.416 10:26:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:08.417 10:26:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:08.417 10:26:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:08.417 10:26:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.674 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:08.674 fio-3.35 00:30:08.674 Starting 1 thread 00:30:08.674 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.206 00:30:11.206 test: (groupid=0, jobs=1): err= 0: pid=3609420: Wed Apr 17 10:26:44 2024 00:30:11.206 read: IOPS=9715, BW=152MiB/s (159MB/s)(305MiB/2009msec) 00:30:11.206 slat (nsec): min=2322, max=81514, avg=2619.35, stdev=1372.50 00:30:11.206 clat (usec): min=2555, max=53365, avg=8082.49, stdev=5102.91 00:30:11.206 lat (usec): min=2557, max=53367, avg=8085.11, stdev=5102.96 00:30:11.206 clat percentiles (usec): 00:30:11.206 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5604], 00:30:11.206 | 30.00th=[ 6259], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[ 8094], 00:30:11.206 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[11994], 00:30:11.206 | 99.00th=[45351], 99.50th=[49021], 99.90th=[52691], 99.95th=[52691], 00:30:11.206 | 99.99th=[53216] 00:30:11.206 bw ( KiB/s): min=66464, max=93728, per=49.68%, avg=77232.00, stdev=12280.26, samples=4 00:30:11.206 iops : min= 4154, max= 5858, avg=4827.00, stdev=767.52, samples=4 00:30:11.206 write: IOPS=5865, BW=91.7MiB/s (96.1MB/s)(158MiB/1723msec); 0 zone resets 00:30:11.206 slat (usec): min=26, max=375, avg=29.18, stdev= 7.69 00:30:11.206 clat (usec): min=2895, max=19872, avg=9036.92, stdev=2466.21 00:30:11.206 lat (usec): min=2922, max=19900, avg=9066.10, stdev=2466.93 00:30:11.206 clat percentiles (usec): 00:30:11.206 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6849], 00:30:11.206 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8586], 60.00th=[ 9241], 00:30:11.206 | 70.00th=[10290], 80.00th=[11207], 90.00th=[12649], 95.00th=[13698], 00:30:11.206 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:30:11.206 | 99.99th=[18220] 00:30:11.206 bw ( KiB/s): min=69632, max=97760, per=85.88%, avg=80600.00, stdev=13133.92, samples=4 00:30:11.206 iops : min= 4352, max= 6110, avg=5037.50, stdev=820.87, samples=4 00:30:11.206 lat (msec) : 4=2.02%, 10=77.64%, 20=19.48%, 50=0.63%, 100=0.22% 
00:30:11.206 cpu : usr=81.57%, sys=16.98%, ctx=71, majf=0, minf=2 00:30:11.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:30:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.206 issued rwts: total=19519,10107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.206 00:30:11.206 Run status group 0 (all jobs): 00:30:11.206 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=305MiB (320MB), run=2009-2009msec 00:30:11.206 WRITE: bw=91.7MiB/s (96.1MB/s), 91.7MiB/s-91.7MiB/s (96.1MB/s-96.1MB/s), io=158MiB (166MB), run=1723-1723msec 00:30:11.206 10:26:44 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.206 10:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.206 10:26:44 -- common/autotest_common.sh@10 -- # set +x 00:30:11.206 10:26:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.206 10:26:44 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:11.207 10:26:44 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:11.207 10:26:44 -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:11.207 10:26:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:11.207 10:26:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:11.207 10:26:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:11.207 10:26:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:11.207 10:26:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:11.207 10:26:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:11.207 10:26:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:30:11.207 10:26:44 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 -i 10.0.0.2 00:30:11.207 10:26:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.207 10:26:44 -- common/autotest_common.sh@10 -- # set +x 00:30:13.736 Nvme0n1 00:30:13.736 10:26:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.737 10:26:47 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:13.737 10:26:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.737 10:26:47 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- host/fio.sh@51 -- # ls_guid=1a5039d7-fc43-4af0-9bf2-35d2a5831a34 00:30:17.019 10:26:49 -- host/fio.sh@52 -- # get_lvs_free_mb 1a5039d7-fc43-4af0-9bf2-35d2a5831a34 00:30:17.019 10:26:49 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1a5039d7-fc43-4af0-9bf2-35d2a5831a34 00:30:17.019 10:26:49 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:17.019 10:26:49 -- common/autotest_common.sh@1345 -- # local fc 00:30:17.019 10:26:49 -- common/autotest_common.sh@1346 -- # local cs 00:30:17.019 10:26:49 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:17.019 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.019 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:17.019 { 00:30:17.019 "uuid": 
"1a5039d7-fc43-4af0-9bf2-35d2a5831a34", 00:30:17.019 "name": "lvs_0", 00:30:17.019 "base_bdev": "Nvme0n1", 00:30:17.019 "total_data_clusters": 930, 00:30:17.019 "free_clusters": 930, 00:30:17.019 "block_size": 512, 00:30:17.019 "cluster_size": 1073741824 00:30:17.019 } 00:30:17.019 ]' 00:30:17.019 10:26:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1a5039d7-fc43-4af0-9bf2-35d2a5831a34") .free_clusters' 00:30:17.019 10:26:49 -- common/autotest_common.sh@1348 -- # fc=930 00:30:17.019 10:26:49 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1a5039d7-fc43-4af0-9bf2-35d2a5831a34") .cluster_size' 00:30:17.019 10:26:49 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:30:17.019 10:26:49 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:30:17.019 10:26:49 -- common/autotest_common.sh@1353 -- # echo 952320 00:30:17.019 952320 00:30:17.019 10:26:49 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:17.019 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.019 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 c7952262-a956-4e12-a756-7dacf7a8c20a 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:17.019 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.019 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:17.019 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.019 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:17.019 10:26:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:17.019 10:26:49 -- common/autotest_common.sh@10 -- # set +x 00:30:17.019 10:26:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:17.019 10:26:49 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.019 10:26:49 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.019 10:26:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:17.019 10:26:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.019 10:26:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:17.019 10:26:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.019 10:26:49 -- common/autotest_common.sh@1320 -- # shift 00:30:17.019 10:26:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:17.019 10:26:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.019 10:26:49 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.019 10:26:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:17.019 10:26:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:17.019 10:26:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:17.019 10:26:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:17.020 10:26:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.020 10:26:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:17.020 10:26:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:17.020 10:26:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:17.020 10:26:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:17.020 10:26:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:17.020 10:26:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:17.020 10:26:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:17.277 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:17.277 fio-3.35 00:30:17.277 Starting 1 thread 00:30:17.277 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.804 00:30:19.804 test: (groupid=0, jobs=1): err= 0: pid=3611017: Wed Apr 17 10:26:52 2024 00:30:19.804 read: IOPS=5710, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2009msec) 00:30:19.804 slat (usec): min=2, max=124, avg= 2.50, stdev= 1.63 00:30:19.804 clat (usec): min=1107, max=171504, avg=12387.66, stdev=11895.69 00:30:19.804 lat (usec): min=1109, max=171529, avg=12390.16, stdev=11895.94 00:30:19.804 clat percentiles (msec): 00:30:19.804 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:30:19.804 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:30:19.804 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:30:19.804 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:19.804 | 99.99th=[ 171] 00:30:19.804 bw ( KiB/s): min=16448, max=25072, per=99.90%, avg=22820.00, stdev=4250.41, samples=4 00:30:19.804 iops : min= 4112, max= 6268, avg=5705.00, stdev=1062.60, samples=4 00:30:19.804 write: IOPS=5692, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2009msec); 0 zone resets 00:30:19.804 slat (usec): min=2, max=111, avg= 2.61, stdev= 1.11 00:30:19.804 clat (usec): min=329, max=169540, avg=9974.50, stdev=11186.30 00:30:19.804 lat (usec): min=332, max=169547, avg=9977.11, stdev=11186.59 00:30:19.804 clat percentiles (msec): 00:30:19.804 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:30:19.804 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:30:19.804 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:30:19.804 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:19.804 | 99.99th=[ 169] 00:30:19.804 bw ( KiB/s): min=17416, max=24640, per=99.89%, avg=22746.00, stdev=3554.90, samples=4 00:30:19.804 iops : min= 4354, max= 6160, avg=5686.50, stdev=888.73, samples=4 00:30:19.804 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:19.804 lat (msec) : 2=0.03%, 4=0.13%, 10=45.33%, 20=53.94%, 250=0.56% 00:30:19.804 cpu : usr=69.27%, sys=28.54%, ctx=65, majf=0, minf=5 00:30:19.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:19.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.804 issued rwts: total=11473,11437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.804 00:30:19.804 Run status group 0 (all jobs): 00:30:19.804 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2009-2009msec 00:30:19.804 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2009-2009msec 00:30:19.804 10:26:52 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:19.804 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.804 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:30:19.804 10:26:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:19.804 10:26:52 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:19.804 10:26:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:19.804 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:30:20.738 10:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.738 10:26:53 -- host/fio.sh@62 -- # ls_nested_guid=c2859d95-89e3-4e06-a1f3-3894cb5addff 00:30:20.738 10:26:53 -- host/fio.sh@63 -- # get_lvs_free_mb c2859d95-89e3-4e06-a1f3-3894cb5addff 00:30:20.738 10:26:53 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c2859d95-89e3-4e06-a1f3-3894cb5addff 00:30:20.738 10:26:53 -- common/autotest_common.sh@1344 -- # local lvs_info 00:30:20.738 10:26:53 -- common/autotest_common.sh@1345 -- # local fc 00:30:20.738 10:26:53 -- common/autotest_common.sh@1346 -- # local cs 00:30:20.738 10:26:53 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:20.738 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.738 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:20.738 10:26:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.738 10:26:53 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:30:20.738 { 00:30:20.738 "uuid": "1a5039d7-fc43-4af0-9bf2-35d2a5831a34", 00:30:20.738 "name": "lvs_0", 00:30:20.738 "base_bdev": "Nvme0n1", 00:30:20.738 "total_data_clusters": 930, 00:30:20.738 "free_clusters": 0, 00:30:20.738 "block_size": 512, 00:30:20.738 "cluster_size": 1073741824 00:30:20.738 }, 00:30:20.738 { 00:30:20.738 "uuid": "c2859d95-89e3-4e06-a1f3-3894cb5addff", 00:30:20.738 "name": "lvs_n_0", 00:30:20.738 "base_bdev": "c7952262-a956-4e12-a756-7dacf7a8c20a", 00:30:20.738 "total_data_clusters": 237847, 00:30:20.738 "free_clusters": 237847, 00:30:20.738 "block_size": 512, 00:30:20.738 "cluster_size": 4194304 00:30:20.738 } 00:30:20.738 ]' 00:30:20.738 10:26:53 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c2859d95-89e3-4e06-a1f3-3894cb5addff") .free_clusters' 00:30:20.738 10:26:53 -- common/autotest_common.sh@1348 -- # fc=237847 00:30:20.738 10:26:53 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c2859d95-89e3-4e06-a1f3-3894cb5addff") .cluster_size' 00:30:20.738 10:26:53 -- common/autotest_common.sh@1349 -- # cs=4194304 00:30:20.738 10:26:53 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:30:20.738 10:26:53 -- common/autotest_common.sh@1353 -- # echo 951388 00:30:20.739 951388 00:30:20.739 10:26:53 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 
lbd_nest_0 951388 00:30:20.739 10:26:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.739 10:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:21.318 fbfd763a-6388-4570-b45a-ff02f77f4a90 00:30:21.318 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.318 10:26:54 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:21.318 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.318 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.318 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.318 10:26:54 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:21.318 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.318 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.318 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.318 10:26:54 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:21.318 10:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:21.318 10:26:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.318 10:26:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:21.318 10:26:54 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:21.318 10:26:54 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:21.318 10:26:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:21.318 10:26:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.318 10:26:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:21.318 10:26:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.318 10:26:54 -- common/autotest_common.sh@1320 -- # shift 00:30:21.318 10:26:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:21.318 10:26:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:21.318 10:26:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:21.318 10:26:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:21.318 10:26:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:21.318 10:26:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:21.318 10:26:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:21.319 10:26:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:21.581 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:21.581 fio-3.35 00:30:21.581 Starting 1 thread 00:30:21.581 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.110 00:30:24.110 test: (groupid=0, jobs=1): err= 0: pid=3611864: Wed Apr 17 10:26:57 2024 00:30:24.110 read: IOPS=8730, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2006msec) 00:30:24.110 slat (usec): min=2, max=122, avg= 2.48, stdev= 1.29 00:30:24.110 clat (usec): min=2985, max=14195, avg=8089.61, stdev=644.86 00:30:24.110 lat (usec): min=2990, max=14198, avg=8092.09, stdev=644.80 00:30:24.110 clat percentiles (usec): 00:30:24.110 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7570], 00:30:24.110 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:30:24.110 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:30:24.110 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[11338], 99.95th=[13042], 00:30:24.110 | 99.99th=[14091] 00:30:24.110 bw ( KiB/s): min=33656, max=35680, per=99.92%, avg=34894.00, stdev=867.14, samples=4 00:30:24.110 iops : min= 8414, max= 8920, avg=8723.50, stdev=216.78, samples=4 00:30:24.110 write: IOPS=8729, BW=34.1MiB/s (35.8MB/s)(68.4MiB/2006msec); 0 zone resets 00:30:24.110 slat (nsec): min=2443, max=116358, avg=2603.00, stdev=932.83 00:30:24.110 clat (usec): min=1606, max=11235, avg=6472.50, stdev=564.24 00:30:24.110 lat (usec): min=1613, max=11237, avg=6475.11, stdev=564.21 00:30:24.110 clat percentiles (usec): 00:30:24.110 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063], 00:30:24.110 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:30:24.110 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:30:24.110 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[ 9896], 99.95th=[10290], 00:30:24.110 | 99.99th=[11207] 00:30:24.111 bw ( KiB/s): min=34640, max=35328, per=99.95%, avg=34900.00, stdev=314.99, samples=4 00:30:24.111 iops : min= 8660, max= 8832, avg=8725.00, stdev=78.75, samples=4 00:30:24.111 lat (msec) : 2=0.01%, 4=0.10%, 10=99.73%, 20=0.16% 00:30:24.111 cpu : usr=73.02%, sys=23.74%, ctx=82, majf=0, minf=5 00:30:24.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:24.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.111 issued rwts: total=17513,17511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.111 00:30:24.111 Run status group 0 (all jobs): 00:30:24.111 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.7MB), run=2006-2006msec 00:30:24.111 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.4MiB (71.7MB), run=2006-2006msec 00:30:24.111 10:26:57 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:24.111 10:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.111 10:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.111 10:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
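The fio_plugin call traced above reduces to a single fio invocation with the SPDK NVMe engine injected via LD_PRELOAD; the "filename" string names the NVMe/TCP subsystem by transport parameters instead of a block device. A minimal sketch, assuming only the workspace paths, fio binary and target address already shown in this log:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path seen in the trace
  FIO_BIN=/usr/src/fio/fio                                     # fio binary used by the harness

  # Inject the SPDK fio plugin and address the target by trtype (tcp), address
  # family, target address, service id (port 4420) and namespace id 1.
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" "$FIO_BIN" \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096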
00:30:24.111 10:26:57 -- host/fio.sh@72 -- # sync 00:30:24.111 10:26:57 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:24.111 10:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.111 10:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:28.296 10:27:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:28.296 10:27:00 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:28.296 10:27:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:28.296 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:30:28.296 10:27:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:28.296 10:27:00 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:28.296 10:27:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:28.296 10:27:00 -- common/autotest_common.sh@10 -- # set +x 00:30:30.198 10:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:30.198 10:27:03 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:30.198 10:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:30.198 10:27:03 -- common/autotest_common.sh@10 -- # set +x 00:30:30.198 10:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:30.198 10:27:03 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:30.198 10:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:30.198 10:27:03 -- common/autotest_common.sh@10 -- # set +x 00:30:32.101 10:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.101 10:27:05 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:32.101 10:27:05 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:32.101 10:27:05 -- host/fio.sh@84 -- # nvmftestfini 00:30:32.101 10:27:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:32.101 10:27:05 -- nvmf/common.sh@116 -- # sync 00:30:32.101 10:27:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:32.101 10:27:05 -- nvmf/common.sh@119 -- # set +e 00:30:32.101 10:27:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:32.101 10:27:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:32.101 rmmod nvme_tcp 00:30:32.101 rmmod nvme_fabrics 00:30:32.101 rmmod nvme_keyring 00:30:32.101 10:27:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:32.101 10:27:05 -- nvmf/common.sh@123 -- # set -e 00:30:32.101 10:27:05 -- nvmf/common.sh@124 -- # return 0 00:30:32.101 10:27:05 -- nvmf/common.sh@477 -- # '[' -n 3608362 ']' 00:30:32.101 10:27:05 -- nvmf/common.sh@478 -- # killprocess 3608362 00:30:32.101 10:27:05 -- common/autotest_common.sh@926 -- # '[' -z 3608362 ']' 00:30:32.101 10:27:05 -- common/autotest_common.sh@930 -- # kill -0 3608362 00:30:32.101 10:27:05 -- common/autotest_common.sh@931 -- # uname 00:30:32.101 10:27:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:32.101 10:27:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3608362 00:30:32.101 10:27:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:32.101 10:27:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:32.101 10:27:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3608362' 00:30:32.101 killing process with pid 3608362 00:30:32.101 10:27:05 -- common/autotest_common.sh@945 -- # kill 3608362 00:30:32.101 10:27:05 -- common/autotest_common.sh@950 -- # wait 3608362 00:30:32.360 10:27:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:32.360 10:27:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
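The teardown traced above unloads the initiator-side NVMe over Fabrics kernel modules and stops the nvmf_tgt process started earlier, before the interface address is flushed and the spdk network namespace is removed below. A minimal sketch of those steps, assuming the pid and module names reported in this log:

  # Same order as the trace: module unload first, then stop the target.
  modprobe -v -r nvme-tcp        # the trace shows this also dropping nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics    # no-op if the previous removal already unloaded it

  nvmfpid=3608362                # pid reported above; differs on every run
  if kill -0 "$nvmfpid" 2>/dev/null; then    # is the target still alive?
      kill "$nvmfpid"                        # ask it to exit
      wait "$nvmfpid" 2>/dev/null || true    # only reaps if launched from this shell
  fi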
00:30:32.360 10:27:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:32.360 10:27:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.360 10:27:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:32.360 10:27:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.360 10:27:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:32.360 10:27:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.896 10:27:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:34.896 00:30:34.896 real 0m35.723s 00:30:34.896 user 2m42.536s 00:30:34.896 sys 0m7.878s 00:30:34.896 10:27:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.896 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.896 ************************************ 00:30:34.896 END TEST nvmf_fio_host 00:30:34.896 ************************************ 00:30:34.896 10:27:07 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.896 10:27:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:34.896 10:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:34.896 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.896 ************************************ 00:30:34.896 START TEST nvmf_failover 00:30:34.896 ************************************ 00:30:34.896 10:27:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.896 * Looking for test storage... 00:30:34.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.896 10:27:07 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.896 10:27:07 -- nvmf/common.sh@7 -- # uname -s 00:30:34.896 10:27:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.896 10:27:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.896 10:27:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.896 10:27:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.896 10:27:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.896 10:27:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.896 10:27:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.896 10:27:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.896 10:27:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.896 10:27:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.896 10:27:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:34.896 10:27:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:34.896 10:27:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.896 10:27:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.896 10:27:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.896 10:27:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.896 10:27:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.896 10:27:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.896 10:27:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.896 10:27:07 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.896 10:27:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.896 10:27:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.896 10:27:07 -- paths/export.sh@5 -- # export PATH 00:30:34.896 10:27:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.896 10:27:07 -- nvmf/common.sh@46 -- # : 0 00:30:34.896 10:27:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:34.896 10:27:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:34.896 10:27:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:34.896 10:27:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.896 10:27:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.896 10:27:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:34.896 10:27:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:34.896 10:27:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:34.896 10:27:07 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.896 10:27:07 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.896 10:27:07 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:34.896 10:27:07 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.896 10:27:07 -- host/failover.sh@18 -- # nvmftestinit 00:30:34.896 10:27:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:34.896 10:27:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.896 10:27:07 -- nvmf/common.sh@436 -- # 
prepare_net_devs 00:30:34.896 10:27:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:34.896 10:27:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:34.896 10:27:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.896 10:27:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.896 10:27:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.896 10:27:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:34.896 10:27:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:34.896 10:27:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:34.896 10:27:07 -- common/autotest_common.sh@10 -- # set +x 00:30:40.264 10:27:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:40.264 10:27:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:40.264 10:27:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:40.264 10:27:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:40.264 10:27:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:40.264 10:27:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:40.264 10:27:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:40.264 10:27:13 -- nvmf/common.sh@294 -- # net_devs=() 00:30:40.264 10:27:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:40.264 10:27:13 -- nvmf/common.sh@295 -- # e810=() 00:30:40.264 10:27:13 -- nvmf/common.sh@295 -- # local -ga e810 00:30:40.264 10:27:13 -- nvmf/common.sh@296 -- # x722=() 00:30:40.264 10:27:13 -- nvmf/common.sh@296 -- # local -ga x722 00:30:40.264 10:27:13 -- nvmf/common.sh@297 -- # mlx=() 00:30:40.264 10:27:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:40.264 10:27:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.264 10:27:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:40.264 10:27:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:40.264 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:40.264 10:27:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.264 10:27:13 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:40.264 10:27:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:40.264 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:40.264 10:27:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:40.264 10:27:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.264 10:27:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.264 10:27:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:40.264 Found net devices under 0000:af:00.0: cvl_0_0 00:30:40.264 10:27:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:40.264 10:27:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.264 10:27:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.264 10:27:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:40.264 Found net devices under 0000:af:00.1: cvl_0_1 00:30:40.264 10:27:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:40.264 10:27:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:40.264 10:27:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.264 10:27:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.264 10:27:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:40.264 10:27:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.264 10:27:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.264 10:27:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:40.264 10:27:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.264 10:27:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.264 10:27:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:40.264 10:27:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:40.264 10:27:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.264 10:27:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.264 10:27:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.264 10:27:13 -- nvmf/common.sh@254 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.264 10:27:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:40.264 10:27:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.264 10:27:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.264 10:27:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.264 10:27:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:40.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:30:40.264 00:30:40.264 --- 10.0.0.2 ping statistics --- 00:30:40.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.264 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:30:40.264 10:27:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:40.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:30:40.264 00:30:40.264 --- 10.0.0.1 ping statistics --- 00:30:40.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.264 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:30:40.264 10:27:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.264 10:27:13 -- nvmf/common.sh@410 -- # return 0 00:30:40.264 10:27:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:40.264 10:27:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.264 10:27:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:40.264 10:27:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.264 10:27:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:40.264 10:27:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:40.264 10:27:13 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:40.264 10:27:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:40.264 10:27:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:40.264 10:27:13 -- common/autotest_common.sh@10 -- # set +x 00:30:40.264 10:27:13 -- nvmf/common.sh@469 -- # nvmfpid=3617862 00:30:40.264 10:27:13 -- nvmf/common.sh@470 -- # waitforlisten 3617862 00:30:40.264 10:27:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:40.264 10:27:13 -- common/autotest_common.sh@819 -- # '[' -z 3617862 ']' 00:30:40.264 10:27:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.264 10:27:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:40.264 10:27:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.264 10:27:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:40.265 10:27:13 -- common/autotest_common.sh@10 -- # set +x 00:30:40.265 [2024-04-17 10:27:13.554023] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:30:40.265 [2024-04-17 10:27:13.554062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.265 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.524 [2024-04-17 10:27:13.621102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.524 [2024-04-17 10:27:13.708196] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:40.524 [2024-04-17 10:27:13.708342] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.524 [2024-04-17 10:27:13.708354] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.524 [2024-04-17 10:27:13.708363] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.524 [2024-04-17 10:27:13.708466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.524 [2024-04-17 10:27:13.708580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.524 [2024-04-17 10:27:13.708580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.459 10:27:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:41.459 10:27:14 -- common/autotest_common.sh@852 -- # return 0 00:30:41.459 10:27:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:41.459 10:27:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.459 10:27:14 -- common/autotest_common.sh@10 -- # set +x 00:30:41.459 10:27:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.459 10:27:14 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:41.459 [2024-04-17 10:27:14.747935] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.459 10:27:14 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:41.718 Malloc0 00:30:41.718 10:27:15 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.977 10:27:15 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.235 10:27:15 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.494 [2024-04-17 10:27:15.755281] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.494 10:27:15 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:42.753 [2024-04-17 10:27:15.992067] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:42.753 10:27:16 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:43.012 [2024-04-17 10:27:16.232940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4422 ***
00:30:43.012 10:27:16 -- host/failover.sh@31 -- # bdevperf_pid=3618282
00:30:43.012 10:27:16 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:30:43.012 10:27:16 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:43.012 10:27:16 -- host/failover.sh@34 -- # waitforlisten 3618282 /var/tmp/bdevperf.sock
00:30:43.012 10:27:16 -- common/autotest_common.sh@819 -- # '[' -z 3618282 ']'
00:30:43.012 10:27:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:43.012 10:27:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:43.012 10:27:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:43.012 10:27:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:43.012 10:27:16 -- common/autotest_common.sh@10 -- # set +x
00:30:43.946 10:27:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:43.946 10:27:17 -- common/autotest_common.sh@852 -- # return 0
00:30:43.946 10:27:17 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:44.204 NVMe0n1
00:30:44.463 10:27:17 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:44.722
00:30:44.722 10:27:17 -- host/failover.sh@39 -- # run_test_pid=3618644
00:30:44.722 10:27:17 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:44.722 10:27:17 -- host/failover.sh@41 -- # sleep 1
00:30:45.658 10:27:18 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:45.918 [2024-04-17 10:27:19.152666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d74e0 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* message is repeated for tqpair=0x11d74e0 ...]
00:30:45.918 [2024-04-17 10:27:19.152922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d74e0 is same with the state(5) to be set
00:30:45.918 10:27:19 -- host/failover.sh@45 -- # sleep 3
00:30:49.204 10:27:22 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:49.462
00:30:49.462 10:27:22 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:49.722 [2024-04-17 10:27:22.807483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8390 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x11d8390 ...]
00:30:49.723 [2024-04-17 10:27:22.808129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d8390 is same with the state(5) to be set
00:30:49.723 10:27:22 -- host/failover.sh@50 -- # sleep 3
00:30:53.019 10:27:25 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:53.019 [2024-04-17 10:27:26.062925] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:53.019 10:27:26 -- host/failover.sh@55 -- # sleep 1
00:30:53.954 10:27:27 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:54.213 [2024-04-17 10:27:27.320188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9080 is same with the state(5) to be set
[... the same message is repeated for tqpair=0x11d9080 ...]
00:30:54.214 [2024-04-17 10:27:27.320544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d9080 is same with the state(5) to be set
00:30:54.214 10:27:27 -- host/failover.sh@59 -- # wait 3618644
00:31:00.789 0
00:31:00.789 10:27:33 -- host/failover.sh@61 -- # killprocess 3618282
00:31:00.789 10:27:33 -- common/autotest_common.sh@926 -- # '[' -z 3618282 ']'
00:31:00.789 10:27:33 -- common/autotest_common.sh@930 -- # kill -0 3618282
00:31:00.789 10:27:33 -- common/autotest_common.sh@931 -- # uname
00:31:00.789 10:27:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:00.789 10:27:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3618282
00:31:00.789 10:27:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:00.789 10:27:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:00.789 10:27:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3618282'
00:31:00.789 killing process with pid 3618282
00:31:00.789 10:27:33 -- common/autotest_common.sh@945 -- # kill 3618282
00:31:00.789 10:27:33 -- common/autotest_common.sh@950 -- # wait 3618282
00:31:00.789 10:27:33 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:00.789 [2024-04-17 10:27:16.298590] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:31:00.789 [2024-04-17 10:27:16.298675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3618282 ]
00:31:00.789 EAL: No free 2048 kB hugepages reported on node 1
00:31:00.789 [2024-04-17 10:27:16.381467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:00.789 [2024-04-17 10:27:16.465922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:00.789 Running I/O for 15 seconds...
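The failover scenario that host/failover.sh drives above comes down to a handful of RPCs: bring up a TCP transport and one subsystem backed by a Malloc namespace, listen on three ports, point bdevperf at two of them, then remove and re-add listeners while the verify workload is running; the ABORTED - SQ DELETION completions that follow in try.txt line up with the 10:27:19 removal of the 4420 listener, i.e. the outstanding I/Os being aborted as that path is pulled. A condensed sketch of the sequence, with the paths and names from this run (not the literal script; trap/cleanup and waitforlisten handling are omitted):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # initiator side: bdevperf with its own RPC socket, two paths attached up front
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # force failovers while I/O is in flight: drop 4420, add a path on 4422, drop 4421, re-add 4420, drop 4422
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422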
00:31:00.789 [2024-04-17 10:27:19.153690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.789 [2024-04-17 10:27:19.153732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair is repeated for each remaining outstanding READ/WRITE command on qid:1, every one of them completed as ABORTED - SQ DELETION (00/08) ...]
00:31:00.790 [2024-04-17 10:27:19.155955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:00.790 [2024-04-17 10:27:19.155966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:00.790 [2024-04-17 10:27:19.155978]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.155987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.790 [2024-04-17 10:27:19.156315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.790 [2024-04-17 10:27:19.156384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.790 [2024-04-17 10:27:19.156395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:19.156405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:19.156427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:19.156448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:19.156469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:19.156490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:19.156513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:19.156535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:19.156556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:00.791 [2024-04-17 10:27:19.156600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:00.791 [2024-04-17 10:27:19.156608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18288 len:8 PRP1 0x0 PRP2 0x0 00:31:00.791 [2024-04-17 10:27:19.156617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:19.156671] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a7550 was disconnected and freed. reset controller. 
00:31:00.791 [2024-04-17 10:27:19.156691] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:31:00.791 [2024-04-17 10:27:19.156718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:00.791 [2024-04-17 10:27:19.156731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.791 [2024-04-17 10:27:19.156741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:00.791 [2024-04-17 10:27:19.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.791 [2024-04-17 10:27:19.156761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:00.791 [2024-04-17 10:27:19.156770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.791 [2024-04-17 10:27:19.156781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:00.791 [2024-04-17 10:27:19.156791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.791 [2024-04-17 10:27:19.156800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:00.791 [2024-04-17 10:27:19.156837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b1550 (9): Bad file descriptor 
00:31:00.791 [2024-04-17 10:27:19.159788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:31:00.791 [2024-04-17 10:27:19.193124] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
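The burst of ABORTED - SQ DELETION completions above is the expected pattern when bdev_nvme tears down the I/O qpair for the failed path (10.0.0.2:4420): queued commands are aborted, the controller is reset, and traffic resumes on the alternate listener (10.0.0.2:4421). The sketch below illustrates the same detect / abort / reset cycle using only the public SPDK NVMe driver API rather than the bdev_nvme internals seen in the log; it is a minimal example that assumes an already-connected controller, and the poll_and_recover() helper name is hypothetical, not part of this test.

/* Minimal sketch (assumption: controller already connected via spdk_nvme_connect();
 * poll_and_recover() is a hypothetical helper, not part of the autotest). It mirrors
 * the disconnect -> abort queued I/O -> reset sequence printed in the log above. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	/* Poll for completions; a negative return or a failed controller means the
	 * transport connection (TCP here) is gone. */
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

	if (rc < 0 || spdk_nvme_ctrlr_is_failed(ctrlr)) {
		/* Outstanding commands on the dying qpair complete with
		 * ABORTED - SQ DELETION, exactly as in the notices above. */
		spdk_nvme_ctrlr_free_io_qpair(*qpair);
		*qpair = NULL;

		/* bdev_nvme additionally switches to the alternate trid
		 * (10.0.0.2:4421) before this step; a plain reset simply
		 * reconnects over the controller's current transport ID. */
		if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
			*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
		}
	}
}

The burst that follows (timestamps around 10:27:22) is the same abort-and-reset pattern repeating for a later path event in the test.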
00:31:00.791 [2024-04-17 10:27:22.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 
10:27:22.808604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.808978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.808992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:22.809243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:22.809490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:22.809536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:22.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.791 [2024-04-17 10:27:22.809605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.791 [2024-04-17 10:27:22.809626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.791 [2024-04-17 10:27:22.809638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.809675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.809697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.792 [2024-04-17 10:27:22.809740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.809805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 
10:27:22.809961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.809983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.809994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810395] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.792 [2024-04-17 10:27:22.810968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.810980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.810989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 
10:27:22.811067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:22.811182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bdaa0 is same with the state(5) to be set 00:31:00.792 [2024-04-17 10:27:22.811205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:00.792 [2024-04-17 10:27:22.811213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:00.792 [2024-04-17 10:27:22.811223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104056 len:8 PRP1 0x0 PRP2 0x0 00:31:00.792 [2024-04-17 10:27:22.811232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811278] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6bdaa0 was disconnected and freed. reset controller. 
00:31:00.792 [2024-04-17 10:27:22.811290] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:00.792 [2024-04-17 10:27:22.811316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.792 [2024-04-17 10:27:22.811327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.792 [2024-04-17 10:27:22.811346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.792 [2024-04-17 10:27:22.811366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.792 [2024-04-17 10:27:22.811385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:22.811394] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:00.792 [2024-04-17 10:27:22.814104] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:00.792 [2024-04-17 10:27:22.814136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b1550 (9): Bad file descriptor 00:31:00.792 [2024-04-17 10:27:22.931248] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
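Note: the dump above records in-flight I/O on qid 1 being aborted with "ABORTED - SQ DELETION" while bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422 and resets the controller. A minimal sketch of how one could confirm the controller is still registered after such a reset, reusing the bdevperf RPC socket and controller name that appear elsewhere in this run (rpc.py path abbreviated, not part of the captured trace):

  # List controllers known to the bdevperf instance and check that NVMe0 survived the reset.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && echo "NVMe0 still attached"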
00:31:00.792 [2024-04-17 10:27:27.320690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:27.320731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:27.320751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:27.320762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:27.320775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:27.320785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:27.320797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:27.320808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.792 [2024-04-17 10:27:27.320820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.792 [2024-04-17 10:27:27.320836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.320980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.320989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33464 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.321815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:00.793 [2024-04-17 10:27:27.321859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.321982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.321993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.793 [2024-04-17 10:27:27.322409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.793 [2024-04-17 10:27:27.322495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.793 [2024-04-17 10:27:27.322507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:00.794 [2024-04-17 10:27:27.322772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.322920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.322986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.322998] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:00.794 [2024-04-17 10:27:27.323360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33328 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:00.794 [2024-04-17 10:27:27.323560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ad320 is same with the state(5) to be set 00:31:00.794 [2024-04-17 10:27:27.323583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:00.794 [2024-04-17 10:27:27.323591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:00.794 [2024-04-17 10:27:27.323600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33408 len:8 PRP1 0x0 PRP2 0x0 00:31:00.794 [2024-04-17 10:27:27.323609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323662] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6ad320 was disconnected and freed. reset controller. 
00:31:00.794 [2024-04-17 10:27:27.323675] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:00.794 [2024-04-17 10:27:27.323701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.794 [2024-04-17 10:27:27.323712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.794 [2024-04-17 10:27:27.323735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.794 [2024-04-17 10:27:27.323754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:00.794 [2024-04-17 10:27:27.323775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:00.794 [2024-04-17 10:27:27.323784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:00.794 [2024-04-17 10:27:27.326904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:00.794 [2024-04-17 10:27:27.326936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b1550 (9): Bad file descriptor 00:31:00.794 [2024-04-17 10:27:27.358669] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
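Note: this second failover (10.0.0.2:4422 back to 10.0.0.2:4420) means the controller has now rotated through every attached path. The alternate paths are simply additional trids attached under the same controller name; a condensed sketch of that setup, mirroring the bdev_nvme_attach_controller calls traced further down in this log (rpc.py path abbreviated, loop added for brevity):

  # Attach the same subsystem over three TCP ports under one controller name;
  # the extra trids become failover paths for NVMe0.
  for port in 4420 4421 4422; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done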
00:31:00.794
00:31:00.794 Latency(us)
00:31:00.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:00.794 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:00.794 Verification LBA range: start 0x0 length 0x4000
00:31:00.794 NVMe0n1 : 15.01 11877.93 46.40 568.54 0.00 10263.61 711.21 15966.95
00:31:00.794 ===================================================================================================================
00:31:00.794 Total : 11877.93 46.40 568.54 0.00 10263.61 711.21 15966.95
00:31:00.794 Received shutdown signal, test time was about 15.000000 seconds
00:31:00.794
00:31:00.794 Latency(us)
00:31:00.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:00.794 ===================================================================================================================
00:31:00.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:00.794 10:27:33 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:00.794 10:27:33 -- host/failover.sh@65 -- # count=3
00:31:00.794 10:27:33 -- host/failover.sh@67 -- # (( count != 3 ))
00:31:00.794 10:27:33 -- host/failover.sh@73 -- # bdevperf_pid=3621451
00:31:00.794 10:27:33 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:00.794 10:27:33 -- host/failover.sh@75 -- # waitforlisten 3621451 /var/tmp/bdevperf.sock
00:31:00.794 10:27:33 -- common/autotest_common.sh@819 -- # '[' -z 3621451 ']'
00:31:00.794 10:27:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:00.794 10:27:33 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:00.794 10:27:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:00.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
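Note: the host/failover.sh@65-@67 lines above are the pass criterion for the first bdevperf run: the log is grepped for successful controller resets and exactly three are expected, one per failover. The @72-@75 lines then relaunch bdevperf with -z (wait for an RPC trigger) on /var/tmp/bdevperf.sock for the next scenario. A minimal sketch of that check, assuming try.txt is the captured bdevperf log as the cat later in this trace suggests:

  # Count completed resets in the captured bdevperf log and fail the test if it is not 3.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count != 3 )) && exit 1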
00:31:00.794 10:27:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.794 10:27:33 -- common/autotest_common.sh@10 -- # set +x 00:31:01.054 10:27:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:01.054 10:27:34 -- common/autotest_common.sh@852 -- # return 0 00:31:01.054 10:27:34 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.316 [2024-04-17 10:27:34.582083] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.316 10:27:34 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:01.574 [2024-04-17 10:27:34.830876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:01.574 10:27:34 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.141 NVMe0n1 00:31:02.141 10:27:35 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.399 00:31:02.399 10:27:35 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.657 00:31:02.657 10:27:35 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.657 10:27:35 -- host/failover.sh@82 -- # grep -q NVMe0 00:31:02.915 10:27:36 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.174 10:27:36 -- host/failover.sh@87 -- # sleep 3 00:31:06.460 10:27:39 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:06.461 10:27:39 -- host/failover.sh@88 -- # grep -q NVMe0 00:31:06.461 10:27:39 -- host/failover.sh@90 -- # run_test_pid=3622526 00:31:06.461 10:27:39 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:06.461 10:27:39 -- host/failover.sh@92 -- # wait 3622526 00:31:07.838 0 00:31:07.838 10:27:40 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:07.838 [2024-04-17 10:27:33.420672] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
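The failover sequence traced above reduces to a handful of RPCs: expose two more portals on the target, give the bdevperf-side controller a path to each of them, drop the active path, and have the external script trigger the run. A rough sketch using the same commands, with $SPDK_DIR again standing in for the checkout path:

  SPDK_DIR=/path/to/spdk                      # placeholder
  RPC="$SPDK_DIR/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: listen on two additional ports.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

  # bdevperf side: one path per portal under the same controller name.
  for port in 4420 4421 4422; do
      "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done

  # Drop the active path, give failover a moment, then kick off the workload.
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests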
00:31:07.838 [2024-04-17 10:27:33.420737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3621451 ] 00:31:07.838 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.838 [2024-04-17 10:27:33.502851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.838 [2024-04-17 10:27:33.584778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.838 [2024-04-17 10:27:36.424590] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:07.838 [2024-04-17 10:27:36.424641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.838 [2024-04-17 10:27:36.424662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.838 [2024-04-17 10:27:36.424674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.838 [2024-04-17 10:27:36.424684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.838 [2024-04-17 10:27:36.424695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.838 [2024-04-17 10:27:36.424705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.838 [2024-04-17 10:27:36.424715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:07.838 [2024-04-17 10:27:36.424725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:07.838 [2024-04-17 10:27:36.424735] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:07.838 [2024-04-17 10:27:36.424763] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:07.838 [2024-04-17 10:27:36.424782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a4550 (9): Bad file descriptor 00:31:07.838 [2024-04-17 10:27:36.434992] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:07.838 Running I/O for 1 seconds... 
00:31:07.838
00:31:07.838 Latency(us)
00:31:07.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:07.838 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:07.838 Verification LBA range: start 0x0 length 0x4000
00:31:07.838 NVMe0n1 : 1.01 11573.85 45.21 0.00 0.00 11002.87 1094.75 12332.68
00:31:07.838 ===================================================================================================================
00:31:07.838 Total : 11573.85 45.21 0.00 0.00 11002.87 1094.75 12332.68
00:31:07.838 10:27:40 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:07.838 10:27:40 -- host/failover.sh@95 -- # grep -q NVMe0
00:31:07.838 10:27:41 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:08.097 10:27:41 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:08.097 10:27:41 -- host/failover.sh@99 -- # grep -q NVMe0
00:31:08.097 10:27:41 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:08.355 10:27:41 -- host/failover.sh@101 -- # sleep 3
00:31:11.640 10:27:44 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:11.640 10:27:44 -- host/failover.sh@103 -- # grep -q NVMe0
00:31:11.640 10:27:44 -- host/failover.sh@108 -- # killprocess 3621451
00:31:11.640 10:27:44 -- common/autotest_common.sh@926 -- # '[' -z 3621451 ']'
00:31:11.640 10:27:44 -- common/autotest_common.sh@930 -- # kill -0 3621451
00:31:11.640 10:27:44 -- common/autotest_common.sh@931 -- # uname
00:31:11.640 10:27:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:11.640 10:27:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3621451
00:31:11.898 10:27:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:11.898 10:27:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:11.898 10:27:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3621451'
00:31:11.898 killing process with pid 3621451
00:31:11.898 10:27:44 -- common/autotest_common.sh@945 -- # kill 3621451
00:31:11.898 10:27:44 -- common/autotest_common.sh@950 -- # wait 3621451
00:31:11.898 10:27:45 -- host/failover.sh@110 -- # sync
00:31:11.898 10:27:45 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:12.156 10:27:45 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:31:12.156 10:27:45 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:12.156 10:27:45 -- host/failover.sh@116 -- # nvmftestfini
00:31:12.156 10:27:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:31:12.156 10:27:45 -- nvmf/common.sh@116 -- # sync
00:31:12.156 10:27:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:31:12.156 10:27:45 -- nvmf/common.sh@119 -- # set +e
00:31:12.156 10:27:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:31:12.156 10:27:45 -- nvmf/common.sh@121
-- # modprobe -v -r nvme-tcp 00:31:12.156 rmmod nvme_tcp 00:31:12.156 rmmod nvme_fabrics 00:31:12.413 rmmod nvme_keyring 00:31:12.413 10:27:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:12.413 10:27:45 -- nvmf/common.sh@123 -- # set -e 00:31:12.413 10:27:45 -- nvmf/common.sh@124 -- # return 0 00:31:12.413 10:27:45 -- nvmf/common.sh@477 -- # '[' -n 3617862 ']' 00:31:12.413 10:27:45 -- nvmf/common.sh@478 -- # killprocess 3617862 00:31:12.413 10:27:45 -- common/autotest_common.sh@926 -- # '[' -z 3617862 ']' 00:31:12.413 10:27:45 -- common/autotest_common.sh@930 -- # kill -0 3617862 00:31:12.413 10:27:45 -- common/autotest_common.sh@931 -- # uname 00:31:12.413 10:27:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:12.413 10:27:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3617862 00:31:12.413 10:27:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:12.413 10:27:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:12.413 10:27:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3617862' 00:31:12.413 killing process with pid 3617862 00:31:12.413 10:27:45 -- common/autotest_common.sh@945 -- # kill 3617862 00:31:12.413 10:27:45 -- common/autotest_common.sh@950 -- # wait 3617862 00:31:12.673 10:27:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:12.673 10:27:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:12.673 10:27:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:12.673 10:27:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.673 10:27:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:12.673 10:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.673 10:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:12.673 10:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.574 10:27:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:14.574 00:31:14.574 real 0m40.180s 00:31:14.574 user 2m11.238s 00:31:14.574 sys 0m7.768s 00:31:14.574 10:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.574 10:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:14.574 ************************************ 00:31:14.574 END TEST nvmf_failover 00:31:14.574 ************************************ 00:31:14.836 10:27:47 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:14.836 10:27:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:14.836 10:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:14.836 10:27:47 -- common/autotest_common.sh@10 -- # set +x 00:31:14.836 ************************************ 00:31:14.836 START TEST nvmf_discovery 00:31:14.836 ************************************ 00:31:14.836 10:27:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:14.836 * Looking for test storage... 
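The teardown traced above at the end of TEST nvmf_failover is worth keeping in mind when reproducing the test by hand: the subsystem is deleted, the target process is stopped, and the kernel initiator modules and the test namespace are removed. A condensed sketch under the assumption that the target pid was captured in $nvmfpid when the target was started in the same shell:

  SPDK_DIR=/path/to/spdk                           # placeholder
  "$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # assumption: pid saved at startup

  # Unload the kernel initiator stack and tear down the namespaced interface.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1 2>/dev/null || true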
00:31:14.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:14.836 10:27:48 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.836 10:27:48 -- nvmf/common.sh@7 -- # uname -s 00:31:14.836 10:27:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.836 10:27:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.836 10:27:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.836 10:27:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.836 10:27:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.836 10:27:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.836 10:27:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.836 10:27:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.836 10:27:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.836 10:27:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.836 10:27:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:14.836 10:27:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:14.836 10:27:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.836 10:27:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.836 10:27:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.836 10:27:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.836 10:27:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.836 10:27:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.836 10:27:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.836 10:27:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.837 10:27:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.837 10:27:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.837 10:27:48 -- paths/export.sh@5 -- # export PATH 00:31:14.837 10:27:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.837 10:27:48 -- nvmf/common.sh@46 -- # : 0 00:31:14.837 10:27:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:14.837 10:27:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:14.837 10:27:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:14.837 10:27:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.837 10:27:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.837 10:27:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:14.837 10:27:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:14.837 10:27:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:14.837 10:27:48 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:14.837 10:27:48 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:14.837 10:27:48 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:14.837 10:27:48 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:14.837 10:27:48 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:14.837 10:27:48 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:14.837 10:27:48 -- host/discovery.sh@25 -- # nvmftestinit 00:31:14.837 10:27:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:14.837 10:27:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.837 10:27:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:14.837 10:27:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:14.837 10:27:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:14.837 10:27:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.837 10:27:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.837 10:27:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.837 10:27:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:14.837 10:27:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:14.837 10:27:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:14.837 10:27:48 -- common/autotest_common.sh@10 -- # set +x 00:31:20.163 10:27:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:20.163 10:27:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:20.163 10:27:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:20.163 10:27:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:20.163 10:27:53 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:20.163 10:27:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:20.163 10:27:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:20.163 10:27:53 -- nvmf/common.sh@294 -- # net_devs=() 00:31:20.163 10:27:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:20.163 10:27:53 -- nvmf/common.sh@295 -- # e810=() 00:31:20.163 10:27:53 -- nvmf/common.sh@295 -- # local -ga e810 00:31:20.163 10:27:53 -- nvmf/common.sh@296 -- # x722=() 00:31:20.163 10:27:53 -- nvmf/common.sh@296 -- # local -ga x722 00:31:20.163 10:27:53 -- nvmf/common.sh@297 -- # mlx=() 00:31:20.163 10:27:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:20.163 10:27:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.163 10:27:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:20.163 10:27:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:20.163 10:27:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:20.163 10:27:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:20.163 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:20.163 10:27:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:20.163 10:27:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:20.163 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:20.163 10:27:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:20.163 
10:27:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.163 10:27:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.163 10:27:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:20.163 Found net devices under 0000:af:00.0: cvl_0_0 00:31:20.163 10:27:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.163 10:27:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:20.163 10:27:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.163 10:27:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.163 10:27:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:20.163 Found net devices under 0000:af:00.1: cvl_0_1 00:31:20.163 10:27:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.163 10:27:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:20.163 10:27:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:20.163 10:27:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:20.163 10:27:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.163 10:27:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.163 10:27:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.163 10:27:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:20.163 10:27:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.164 10:27:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.164 10:27:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:20.164 10:27:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.164 10:27:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.164 10:27:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:20.164 10:27:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:20.164 10:27:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.164 10:27:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.421 10:27:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.421 10:27:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.421 10:27:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:20.421 10:27:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.421 10:27:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.421 10:27:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.421 10:27:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:20.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:31:20.421 00:31:20.421 --- 10.0.0.2 ping statistics --- 00:31:20.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.421 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:31:20.421 10:27:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:31:20.421 00:31:20.421 --- 10.0.0.1 ping statistics --- 00:31:20.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.421 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:31:20.422 10:27:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.422 10:27:53 -- nvmf/common.sh@410 -- # return 0 00:31:20.422 10:27:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:20.422 10:27:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.422 10:27:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:20.422 10:27:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:20.422 10:27:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.422 10:27:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:20.422 10:27:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:20.680 10:27:53 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:20.680 10:27:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:20.680 10:27:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:20.680 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:31:20.680 10:27:53 -- nvmf/common.sh@469 -- # nvmfpid=3627086 00:31:20.680 10:27:53 -- nvmf/common.sh@470 -- # waitforlisten 3627086 00:31:20.680 10:27:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:20.680 10:27:53 -- common/autotest_common.sh@819 -- # '[' -z 3627086 ']' 00:31:20.680 10:27:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.680 10:27:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:20.680 10:27:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.680 10:27:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:20.680 10:27:53 -- common/autotest_common.sh@10 -- # set +x 00:31:20.680 [2024-04-17 10:27:53.839703] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:20.680 [2024-04-17 10:27:53.839759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.680 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.680 [2024-04-17 10:27:53.918153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.680 [2024-04-17 10:27:54.007210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:20.680 [2024-04-17 10:27:54.007344] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.680 [2024-04-17 10:27:54.007356] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.680 [2024-04-17 10:27:54.007365] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
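The nvmftestinit sequence traced above builds the discovery test's topology from the two E810 ports found earlier: the target-side port (cvl_0_0 here) is moved into a network namespace and addressed as 10.0.0.2, the initiator side keeps 10.0.0.1 on cvl_0_1, port 4420 is opened, and a ping in each direction confirms connectivity before nvme-tcp is loaded. A condensed sketch of the same steps; the interface names are specific to this machine and would need substituting:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp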
00:31:20.680 [2024-04-17 10:27:54.007391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.614 10:27:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:21.614 10:27:54 -- common/autotest_common.sh@852 -- # return 0 00:31:21.614 10:27:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:21.614 10:27:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 10:27:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.614 10:27:54 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.614 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 [2024-04-17 10:27:54.805187] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.614 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.614 10:27:54 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:21.614 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 [2024-04-17 10:27:54.817321] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:21.614 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.614 10:27:54 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:21.614 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 null0 00:31:21.614 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.614 10:27:54 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:21.614 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 null1 00:31:21.614 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.614 10:27:54 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:21.614 10:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 10:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.614 10:27:54 -- host/discovery.sh@45 -- # hostpid=3627369 00:31:21.614 10:27:54 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:21.614 10:27:54 -- host/discovery.sh@46 -- # waitforlisten 3627369 /tmp/host.sock 00:31:21.614 10:27:54 -- common/autotest_common.sh@819 -- # '[' -z 3627369 ']' 00:31:21.614 10:27:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:21.614 10:27:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:21.614 10:27:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:21.614 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:21.614 10:27:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:21.614 10:27:54 -- common/autotest_common.sh@10 -- # set +x 00:31:21.614 [2024-04-17 10:27:54.889721] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
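Two SPDK applications are started for the discovery test, as the trace above shows: the target inside the namespace on core mask 0x2 (default RPC socket), and a second nvmf_tgt acting as the discovery client on core mask 0x1 with its RPC socket at /tmp/host.sock. The target then gets a TCP transport, a discovery listener on port 8009, and two null bdevs to export. Roughly, with $SPDK_DIR as a placeholder and the waits for the RPC sockets elided:

  SPDK_DIR=/path/to/spdk                      # placeholder
  APP="$SPDK_DIR/build/bin/nvmf_tgt"
  RPC="$SPDK_DIR/scripts/rpc.py"              # no -s: talks to the target's default socket

  ip netns exec cvl_0_0_ns_spdk "$APP" -i 0 -e 0xFFFF -m 0x2 &   # target
  nvmfpid=$!
  "$APP" -m 0x1 -r /tmp/host.sock &                              # discovery client app
  hostpid=$!

  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  "$RPC" bdev_null_create null0 1000 512
  "$RPC" bdev_null_create null1 1000 512
  "$RPC" bdev_wait_for_examine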
00:31:21.614 [2024-04-17 10:27:54.889773] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627369 ] 00:31:21.614 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.872 [2024-04-17 10:27:54.971157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.872 [2024-04-17 10:27:55.059783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:21.872 [2024-04-17 10:27:55.059937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.806 10:27:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:22.807 10:27:55 -- common/autotest_common.sh@852 -- # return 0 00:31:22.807 10:27:55 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.807 10:27:55 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:55 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:55 -- host/discovery.sh@72 -- # notify_id=0 00:31:22.807 10:27:55 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # sort 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # xargs 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:55 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:22.807 10:27:55 -- host/discovery.sh@79 -- # get_bdev_list 00:31:22.807 10:27:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.807 10:27:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:55 -- host/discovery.sh@55 -- # sort 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- host/discovery.sh@55 -- # xargs 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:55 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:22.807 10:27:55 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:55 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.807 10:27:55 -- common/autotest_common.sh@551 -- # 
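On the client side the test enables bdev_nvme logging and starts the discovery service against 10.0.0.2:8009; from then on, whatever the target exposes shows up as controllers and bdevs on the host socket. The helpers get_subsystem_names and get_bdev_list used throughout the script are just jq one-liners over the RPC output. A sketch, again with $SPDK_DIR standing in for the checkout:

  SPDK_DIR=/path/to/spdk                      # placeholder
  HOST_RPC="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock"

  $HOST_RPC log_set_flag bdev_nvme
  $HOST_RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test

  # Equivalents of get_subsystem_names / get_bdev_list (both empty until the
  # target exposes a subsystem with a namespace, a listener and an allowed host).
  $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  $HOST_RPC bdev_get_bdevs            | jq -r '.[].name' | sort | xargs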
xtrace_disable 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.807 10:27:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # sort 00:31:22.807 10:27:55 -- host/discovery.sh@59 -- # xargs 00:31:22.807 10:27:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:56 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:22.807 10:27:56 -- host/discovery.sh@83 -- # get_bdev_list 00:31:22.807 10:27:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.807 10:27:56 -- host/discovery.sh@55 -- # xargs 00:31:22.807 10:27:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.807 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:56 -- host/discovery.sh@55 -- # sort 00:31:22.807 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:56 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:22.807 10:27:56 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:22.807 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:56 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:22.807 10:27:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.807 10:27:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.807 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.807 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:22.807 10:27:56 -- host/discovery.sh@59 -- # sort 00:31:22.807 10:27:56 -- host/discovery.sh@59 -- # xargs 00:31:22.807 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.807 10:27:56 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:23.066 10:27:56 -- host/discovery.sh@87 -- # get_bdev_list 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # xargs 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # sort 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:23.066 10:27:56 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 [2024-04-17 10:27:56.201177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:23.066 10:27:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:23.066 10:27:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 10:27:56 -- host/discovery.sh@59 -- # sort 00:31:23.066 10:27:56 -- 
host/discovery.sh@59 -- # xargs 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:23.066 10:27:56 -- host/discovery.sh@93 -- # get_bdev_list 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # sort 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- host/discovery.sh@55 -- # xargs 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:23.066 10:27:56 -- host/discovery.sh@94 -- # get_notification_count 00:31:23.066 10:27:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:23.066 10:27:56 -- host/discovery.sh@74 -- # jq '. | length' 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@74 -- # notification_count=0 00:31:23.066 10:27:56 -- host/discovery.sh@75 -- # notify_id=0 00:31:23.066 10:27:56 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:23.066 10:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.066 10:27:56 -- common/autotest_common.sh@10 -- # set +x 00:31:23.066 10:27:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.066 10:27:56 -- host/discovery.sh@100 -- # sleep 1 00:31:23.632 [2024-04-17 10:27:56.907864] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:23.632 [2024-04-17 10:27:56.907886] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:23.632 [2024-04-17 10:27:56.907903] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.890 [2024-04-17 10:27:56.995215] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:23.890 [2024-04-17 10:27:57.098021] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:23.890 [2024-04-17 10:27:57.098046] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.148 10:27:57 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:24.148 10:27:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.148 10:27:57 -- host/discovery.sh@59 -- # sort 00:31:24.148 10:27:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:24.148 10:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.148 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.148 10:27:57 -- host/discovery.sh@59 -- # xargs 00:31:24.148 10:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.148 10:27:57 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.148 10:27:57 -- host/discovery.sh@102 -- # get_bdev_list 00:31:24.148 10:27:57 -- host/discovery.sh@55 -- # 
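Once the target has created nqn.2016-06.io.spdk:cnode0 with a namespace, a data listener on 4420 and the client's host NQN, the discovery service attaches, and a controller nvme0 with bdev nvme0n1 appears on the host, as the attach messages above show. The follow-up checks in the script reduce to a few more jq pipelines; a sketch of the target-side setup plus the host-side queries, using the same placeholder paths as the earlier sketches:

  RPC=/path/to/spdk/scripts/rpc.py            # placeholders, as before
  HOST_RPC="$RPC -s /tmp/host.sock"

  # Target side: a subsystem the discovery log page will advertise.
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  sleep 1

  # Host side: which portals does nvme0 have, and how many bdev notifications so far?
  $HOST_RPC bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  $HOST_RPC notify_get_notifications -i 0 | jq '. | length'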
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.148 10:27:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.148 10:27:57 -- host/discovery.sh@55 -- # sort 00:31:24.148 10:27:57 -- host/discovery.sh@55 -- # xargs 00:31:24.148 10:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.148 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.148 10:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.148 10:27:57 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:24.148 10:27:57 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:24.148 10:27:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:24.148 10:27:57 -- host/discovery.sh@63 -- # xargs 00:31:24.148 10:27:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.148 10:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.148 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.148 10:27:57 -- host/discovery.sh@63 -- # sort -n 00:31:24.407 10:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.407 10:27:57 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:24.407 10:27:57 -- host/discovery.sh@104 -- # get_notification_count 00:31:24.407 10:27:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:24.407 10:27:57 -- host/discovery.sh@74 -- # jq '. | length' 00:31:24.407 10:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.407 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.407 10:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.407 10:27:57 -- host/discovery.sh@74 -- # notification_count=1 00:31:24.407 10:27:57 -- host/discovery.sh@75 -- # notify_id=1 00:31:24.407 10:27:57 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:24.407 10:27:57 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:24.407 10:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.407 10:27:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.407 10:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.407 10:27:57 -- host/discovery.sh@109 -- # sleep 1 00:31:25.342 10:27:58 -- host/discovery.sh@110 -- # get_bdev_list 00:31:25.342 10:27:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.342 10:27:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.342 10:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.342 10:27:58 -- host/discovery.sh@55 -- # sort 00:31:25.342 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:31:25.342 10:27:58 -- host/discovery.sh@55 -- # xargs 00:31:25.342 10:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.342 10:27:58 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:25.342 10:27:58 -- host/discovery.sh@111 -- # get_notification_count 00:31:25.342 10:27:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:25.342 10:27:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.342 10:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.342 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:31:25.342 10:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.601 10:27:58 -- host/discovery.sh@74 -- # notification_count=1 00:31:25.601 10:27:58 -- host/discovery.sh@75 -- # notify_id=2 00:31:25.601 10:27:58 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:25.601 10:27:58 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:25.601 10:27:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.601 10:27:58 -- common/autotest_common.sh@10 -- # set +x 00:31:25.601 [2024-04-17 10:27:58.680593] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:25.601 [2024-04-17 10:27:58.681295] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:25.601 [2024-04-17 10:27:58.681324] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.601 10:27:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.601 10:27:58 -- host/discovery.sh@117 -- # sleep 1 00:31:25.601 [2024-04-17 10:27:58.767588] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:25.859 [2024-04-17 10:27:59.077959] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:25.859 [2024-04-17 10:27:59.077981] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:25.859 [2024-04-17 10:27:59.077988] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:26.426 10:27:59 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:26.426 10:27:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:26.426 10:27:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:26.426 10:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.426 10:27:59 -- host/discovery.sh@59 -- # sort 00:31:26.426 10:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.426 10:27:59 -- host/discovery.sh@59 -- # xargs 00:31:26.426 10:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.426 10:27:59 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:26.426 10:27:59 -- host/discovery.sh@119 -- # get_bdev_list 00:31:26.426 10:27:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.426 10:27:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:26.426 10:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.426 10:27:59 -- host/discovery.sh@55 -- # sort 00:31:26.426 10:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.426 10:27:59 -- host/discovery.sh@55 -- # xargs 00:31:26.685 10:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:26.685 10:27:59 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:26.685 10:27:59 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:26.685 10:27:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.685 10:27:59 -- host/discovery.sh@63 -- # sort -n 00:31:26.685 10:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.685 10:27:59 -- host/discovery.sh@63 -- # xargs 00:31:26.685 10:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@121 -- # get_notification_count 00:31:26.685 10:27:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:26.685 10:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.685 10:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.685 10:27:59 -- host/discovery.sh@74 -- # jq '. | length' 00:31:26.685 10:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@74 -- # notification_count=0 00:31:26.685 10:27:59 -- host/discovery.sh@75 -- # notify_id=2 00:31:26.685 10:27:59 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.685 10:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.685 10:27:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.685 [2024-04-17 10:27:59.892541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.685 [2024-04-17 10:27:59.892572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.685 [2024-04-17 10:27:59.892586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.685 [2024-04-17 10:27:59.892598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.685 [2024-04-17 10:27:59.892608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.685 [2024-04-17 10:27:59.892618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.685 [2024-04-17 10:27:59.892629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:26.685 [2024-04-17 10:27:59.892639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:26.685 [2024-04-17 10:27:59.892654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.685 [2024-04-17 10:27:59.892827] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:26.685 [2024-04-17 10:27:59.892846] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.685 10:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.685 10:27:59 -- host/discovery.sh@127 -- # sleep 1 00:31:26.685 [2024-04-17 10:27:59.902548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.685 [2024-04-17 10:27:59.912592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.685 [2024-04-17 10:27:59.912894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.685 [2024-04-17 10:27:59.913155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.685 [2024-04-17 10:27:59.913172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.685 [2024-04-17 10:27:59.913183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.685 [2024-04-17 10:27:59.913200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.685 [2024-04-17 10:27:59.913224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.913235] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.913246] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.913261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.686 [2024-04-17 10:27:59.922660] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.922913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.923196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.923211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.923222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.923237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.923262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.923272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.923281] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.923296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:26.686 [2024-04-17 10:27:59.932719] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.933008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.933286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.933301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.933312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.933327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.933349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.933359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.933369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.933383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.686 [2024-04-17 10:27:59.942780] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.943071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.943347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.943362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.943374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.943389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.943413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.943423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.943433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.943447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:26.686 [2024-04-17 10:27:59.952841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.953150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.953346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.953365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.953376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.953391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.953406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.953415] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.953424] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.953438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.686 [2024-04-17 10:27:59.962899] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.963183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.963455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.963471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.963481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.963496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.963520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.963531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.963541] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.963555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:26.686 [2024-04-17 10:27:59.972958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.973167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.973422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.973438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.973448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.973463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.973477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.973485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.973495] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.973509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.686 [2024-04-17 10:27:59.983018] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.983344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.983535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.983550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.983566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.983580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.983604] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.983614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.983625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.983638] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:26.686 [2024-04-17 10:27:59.993081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:27:59.993362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.993496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:27:59.993511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:27:59.993521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:27:59.993536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:27:59.993550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.686 [2024-04-17 10:27:59.993558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.686 [2024-04-17 10:27:59.993568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.686 [2024-04-17 10:27:59.993581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.686 [2024-04-17 10:28:00.003142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.686 [2024-04-17 10:28:00.003457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:28:00.003605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.686 [2024-04-17 10:28:00.003622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.686 [2024-04-17 10:28:00.003633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.686 [2024-04-17 10:28:00.003658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.686 [2024-04-17 10:28:00.003675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.687 [2024-04-17 10:28:00.003684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.687 [2024-04-17 10:28:00.003694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.687 [2024-04-17 10:28:00.003720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
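The stanzas above repeat roughly every 10 ms: bdev_nvme disconnects the controller, the TCP transport's connect() toward 10.0.0.2:4420 (the path the test has just taken down, per the "not found" discovery log line below) is refused, and the reset completes as failed so the next attempt is scheduled. errno 111 is ECONNREFUSED on Linux; a quick way to decode the errno values seen in this run (a hypothetical helper, not part of the SPDK test scripts):

# Hypothetical one-liner to decode the errnos in this log (111 here, 110 later in the run).
python3 -c 'import errno, os; [print(e, errno.errorcode[e], "-", os.strerror(e)) for e in (110, 111)]'
# 110 ETIMEDOUT - Connection timed out
# 111 ECONNREFUSED - Connection refused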
00:31:26.687 [2024-04-17 10:28:00.013216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:26.687 [2024-04-17 10:28:00.013532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.687 [2024-04-17 10:28:00.013786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:26.687 [2024-04-17 10:28:00.013803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3cc80 with addr=10.0.0.2, port=4420 00:31:26.687 [2024-04-17 10:28:00.013814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3cc80 is same with the state(5) to be set 00:31:26.687 [2024-04-17 10:28:00.013834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3cc80 (9): Bad file descriptor 00:31:26.687 [2024-04-17 10:28:00.013857] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:26.687 [2024-04-17 10:28:00.013868] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:26.687 [2024-04-17 10:28:00.013878] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:26.687 [2024-04-17 10:28:00.013892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:26.945 [2024-04-17 10:28:00.019879] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:26.945 [2024-04-17 10:28:00.019901] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.880 10:28:00 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:27.880 10:28:00 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:27.880 10:28:00 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:27.880 10:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.880 10:28:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.880 10:28:00 -- host/discovery.sh@59 -- # sort 00:31:27.880 10:28:00 -- host/discovery.sh@59 -- # xargs 00:31:27.880 10:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.880 10:28:00 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:27.880 10:28:00 -- host/discovery.sh@129 -- # get_bdev_list 00:31:27.880 10:28:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.880 10:28:00 -- host/discovery.sh@55 -- # xargs 00:31:27.880 10:28:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.880 10:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.880 10:28:00 -- host/discovery.sh@55 -- # sort 00:31:27.880 10:28:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.880 10:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.880 10:28:01 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.880 10:28:01 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:27.880 10:28:01 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:27.881 10:28:01 -- host/discovery.sh@63 -- # xargs 00:31:27.881 10:28:01 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:27.881 10:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.881 10:28:01 -- 
common/autotest_common.sh@10 -- # set +x 00:31:27.881 10:28:01 -- host/discovery.sh@63 -- # sort -n 00:31:27.881 10:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.881 10:28:01 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:27.881 10:28:01 -- host/discovery.sh@131 -- # get_notification_count 00:31:27.881 10:28:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:27.881 10:28:01 -- host/discovery.sh@74 -- # jq '. | length' 00:31:27.881 10:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.881 10:28:01 -- common/autotest_common.sh@10 -- # set +x 00:31:27.881 10:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.881 10:28:01 -- host/discovery.sh@74 -- # notification_count=0 00:31:27.881 10:28:01 -- host/discovery.sh@75 -- # notify_id=2 00:31:27.881 10:28:01 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:27.881 10:28:01 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:27.881 10:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.881 10:28:01 -- common/autotest_common.sh@10 -- # set +x 00:31:27.881 10:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.881 10:28:01 -- host/discovery.sh@135 -- # sleep 1 00:31:28.816 10:28:02 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:28.816 10:28:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:28.816 10:28:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:28.816 10:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.816 10:28:02 -- host/discovery.sh@59 -- # sort 00:31:28.816 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:31:28.816 10:28:02 -- host/discovery.sh@59 -- # xargs 00:31:28.816 10:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.074 10:28:02 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:29.074 10:28:02 -- host/discovery.sh@137 -- # get_bdev_list 00:31:29.074 10:28:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.074 10:28:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:29.074 10:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.074 10:28:02 -- host/discovery.sh@55 -- # sort 00:31:29.074 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:31:29.074 10:28:02 -- host/discovery.sh@55 -- # xargs 00:31:29.074 10:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.075 10:28:02 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:29.075 10:28:02 -- host/discovery.sh@138 -- # get_notification_count 00:31:29.075 10:28:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:29.075 10:28:02 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:29.075 10:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.075 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:31:29.075 10:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:29.075 10:28:02 -- host/discovery.sh@74 -- # notification_count=2 00:31:29.075 10:28:02 -- host/discovery.sh@75 -- # notify_id=4 00:31:29.075 10:28:02 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:29.075 10:28:02 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:29.075 10:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:29.075 10:28:02 -- common/autotest_common.sh@10 -- # set +x 00:31:30.010 [2024-04-17 10:28:03.333254] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:30.010 [2024-04-17 10:28:03.333276] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:30.010 [2024-04-17 10:28:03.333291] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:30.269 [2024-04-17 10:28:03.461771] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:30.528 [2024-04-17 10:28:03.771624] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:30.528 [2024-04-17 10:28:03.771667] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:30.528 10:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.528 10:28:03 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.528 10:28:03 -- common/autotest_common.sh@640 -- # local es=0 00:31:30.528 10:28:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.528 10:28:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:30.528 10:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.528 10:28:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:30.528 10:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.528 10:28:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.528 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.528 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.528 request: 00:31:30.528 { 00:31:30.528 "name": "nvme", 00:31:30.528 "trtype": "tcp", 00:31:30.528 "traddr": "10.0.0.2", 00:31:30.528 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.528 "adrfam": "ipv4", 00:31:30.528 "trsvcid": "8009", 00:31:30.528 "wait_for_attach": true, 00:31:30.528 "method": "bdev_nvme_start_discovery", 00:31:30.528 "req_id": 1 00:31:30.528 } 00:31:30.528 Got JSON-RPC error response 00:31:30.528 response: 00:31:30.528 { 00:31:30.528 "code": -17, 00:31:30.528 "message": "File exists" 00:31:30.528 } 00:31:30.528 10:28:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:30.528 10:28:03 -- common/autotest_common.sh@643 -- # es=1 00:31:30.528 10:28:03 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:30.528 10:28:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:30.528 10:28:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:30.528 10:28:03 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:30.528 10:28:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.528 10:28:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.528 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.528 10:28:03 -- host/discovery.sh@67 -- # sort 00:31:30.528 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.528 10:28:03 -- host/discovery.sh@67 -- # xargs 00:31:30.528 10:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.528 10:28:03 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:30.528 10:28:03 -- host/discovery.sh@147 -- # get_bdev_list 00:31:30.528 10:28:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.528 10:28:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.528 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.528 10:28:03 -- host/discovery.sh@55 -- # sort 00:31:30.528 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.528 10:28:03 -- host/discovery.sh@55 -- # xargs 00:31:30.787 10:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.787 10:28:03 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.787 10:28:03 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.787 10:28:03 -- common/autotest_common.sh@640 -- # local es=0 00:31:30.787 10:28:03 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.787 10:28:03 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:30.787 10:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.787 10:28:03 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:30.787 10:28:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.787 10:28:03 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:30.787 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.787 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.787 request: 00:31:30.787 { 00:31:30.787 "name": "nvme_second", 00:31:30.787 "trtype": "tcp", 00:31:30.787 "traddr": "10.0.0.2", 00:31:30.787 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.787 "adrfam": "ipv4", 00:31:30.787 "trsvcid": "8009", 00:31:30.787 "wait_for_attach": true, 00:31:30.787 "method": "bdev_nvme_start_discovery", 00:31:30.787 "req_id": 1 00:31:30.787 } 00:31:30.787 Got JSON-RPC error response 00:31:30.787 response: 00:31:30.787 { 00:31:30.787 "code": -17, 00:31:30.787 "message": "File exists" 00:31:30.787 } 00:31:30.787 10:28:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:30.787 10:28:03 -- common/autotest_common.sh@643 -- # es=1 00:31:30.787 10:28:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:30.787 10:28:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:30.787 10:28:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:30.787 
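The traces above drive SPDK's discovery RPCs through rpc_cmd, which in these test scripts forwards its arguments to scripts/rpc.py against the host socket given by -s /tmp/host.sock. The -17 "File exists" responses are the expected outcome of calling bdev_nvme_start_discovery again while a discovery service with the same -b name is already attached. A hand-run equivalent, assuming a host app is listening on /tmp/host.sock and using the same flags as the trace, might look like:

# Flags copied from the trace above; the repeat call is expected to fail
# with JSON-RPC error -17 (File exists) because "nvme" is already running.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w   # -> "File exists"
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'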
10:28:03 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:30.787 10:28:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.787 10:28:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.787 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.787 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.787 10:28:03 -- host/discovery.sh@67 -- # sort 00:31:30.787 10:28:03 -- host/discovery.sh@67 -- # xargs 00:31:30.787 10:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.787 10:28:03 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:30.787 10:28:03 -- host/discovery.sh@153 -- # get_bdev_list 00:31:30.787 10:28:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.787 10:28:03 -- host/discovery.sh@55 -- # xargs 00:31:30.787 10:28:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:30.787 10:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.787 10:28:03 -- host/discovery.sh@55 -- # sort 00:31:30.787 10:28:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.787 10:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.787 10:28:04 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:30.787 10:28:04 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.787 10:28:04 -- common/autotest_common.sh@640 -- # local es=0 00:31:30.787 10:28:04 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.787 10:28:04 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:30.787 10:28:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.787 10:28:04 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:30.787 10:28:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:30.787 10:28:04 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:30.787 10:28:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.787 10:28:04 -- common/autotest_common.sh@10 -- # set +x 00:31:31.723 [2024-04-17 10:28:05.027291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-04-17 10:28:05.027617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.723 [2024-04-17 10:28:05.027634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c55a90 with addr=10.0.0.2, port=8010 00:31:31.723 [2024-04-17 10:28:05.027657] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.723 [2024-04-17 10:28:05.027667] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.723 [2024-04-17 10:28:05.027676] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:33.100 [2024-04-17 10:28:06.029658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.101 [2024-04-17 10:28:06.029825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.101 [2024-04-17 10:28:06.029841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1c55a90 with addr=10.0.0.2, port=8010 00:31:33.101 [2024-04-17 10:28:06.029855] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:33.101 [2024-04-17 10:28:06.029864] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:33.101 [2024-04-17 10:28:06.029874] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:34.036 [2024-04-17 10:28:07.031804] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:34.036 request: 00:31:34.036 { 00:31:34.036 "name": "nvme_second", 00:31:34.036 "trtype": "tcp", 00:31:34.036 "traddr": "10.0.0.2", 00:31:34.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:34.036 "adrfam": "ipv4", 00:31:34.036 "trsvcid": "8010", 00:31:34.036 "attach_timeout_ms": 3000, 00:31:34.036 "method": "bdev_nvme_start_discovery", 00:31:34.036 "req_id": 1 00:31:34.036 } 00:31:34.036 Got JSON-RPC error response 00:31:34.036 response: 00:31:34.036 { 00:31:34.036 "code": -110, 00:31:34.036 "message": "Connection timed out" 00:31:34.036 } 00:31:34.036 10:28:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:34.036 10:28:07 -- common/autotest_common.sh@643 -- # es=1 00:31:34.036 10:28:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:34.036 10:28:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:34.036 10:28:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:34.036 10:28:07 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:34.036 10:28:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:34.036 10:28:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:34.036 10:28:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.036 10:28:07 -- host/discovery.sh@67 -- # sort 00:31:34.036 10:28:07 -- common/autotest_common.sh@10 -- # set +x 00:31:34.036 10:28:07 -- host/discovery.sh@67 -- # xargs 00:31:34.036 10:28:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.036 10:28:07 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:34.036 10:28:07 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:34.036 10:28:07 -- host/discovery.sh@162 -- # kill 3627369 00:31:34.036 10:28:07 -- host/discovery.sh@163 -- # nvmftestfini 00:31:34.036 10:28:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:34.036 10:28:07 -- nvmf/common.sh@116 -- # sync 00:31:34.036 10:28:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:34.036 10:28:07 -- nvmf/common.sh@119 -- # set +e 00:31:34.036 10:28:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:34.036 10:28:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:34.036 rmmod nvme_tcp 00:31:34.036 rmmod nvme_fabrics 00:31:34.036 rmmod nvme_keyring 00:31:34.036 10:28:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:34.036 10:28:07 -- nvmf/common.sh@123 -- # set -e 00:31:34.036 10:28:07 -- nvmf/common.sh@124 -- # return 0 00:31:34.036 10:28:07 -- nvmf/common.sh@477 -- # '[' -n 3627086 ']' 00:31:34.036 10:28:07 -- nvmf/common.sh@478 -- # killprocess 3627086 00:31:34.036 10:28:07 -- common/autotest_common.sh@926 -- # '[' -z 3627086 ']' 00:31:34.036 10:28:07 -- common/autotest_common.sh@930 -- # kill -0 3627086 00:31:34.036 10:28:07 -- common/autotest_common.sh@931 -- # uname 00:31:34.036 10:28:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:34.036 10:28:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3627086 
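nvmftestfini above tears the harness down: the discovery host (pid 3627369) is killed, the target process (pid 3627086) is killed, and the kernel initiator modules loaded for the run are removed. Outside the harness the module cleanup amounts to roughly the following sketch, with module names taken from the rmmod output above:

# Unload the kernel NVMe/TCP initiator stack loaded for the test run.
modprobe -v -r nvme-tcp       # per the output above, this also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics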
00:31:34.036 10:28:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:34.036 10:28:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:34.036 10:28:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3627086' 00:31:34.036 killing process with pid 3627086 00:31:34.036 10:28:07 -- common/autotest_common.sh@945 -- # kill 3627086 00:31:34.036 10:28:07 -- common/autotest_common.sh@950 -- # wait 3627086 00:31:34.295 10:28:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:34.295 10:28:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:34.295 10:28:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:34.295 10:28:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.295 10:28:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:34.295 10:28:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.295 10:28:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:34.295 10:28:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.198 10:28:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:36.198 00:31:36.198 real 0m21.584s 00:31:36.198 user 0m29.222s 00:31:36.198 sys 0m5.867s 00:31:36.198 10:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.198 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:31:36.198 ************************************ 00:31:36.198 END TEST nvmf_discovery 00:31:36.198 ************************************ 00:31:36.457 10:28:09 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:36.457 10:28:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:36.457 10:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.457 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:31:36.457 ************************************ 00:31:36.457 START TEST nvmf_discovery_remove_ifc 00:31:36.457 ************************************ 00:31:36.457 10:28:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:36.457 * Looking for test storage... 
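run_test launches the next case through the same wrapper; outside Jenkins the script can be invoked directly from an SPDK checkout (a sketch, assuming a node with the same NIC/NVMe prerequisites and root privileges):

# Same entry point the harness uses, minus the Jenkins workspace prefix.
cd spdk
sudo ./test/nvmf/host/discovery_remove_ifc.sh --transport=tcp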
00:31:36.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:36.457 10:28:09 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.457 10:28:09 -- nvmf/common.sh@7 -- # uname -s 00:31:36.457 10:28:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.457 10:28:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.457 10:28:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.457 10:28:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.457 10:28:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.457 10:28:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.457 10:28:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.457 10:28:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.457 10:28:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.457 10:28:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.457 10:28:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:36.457 10:28:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:36.457 10:28:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.457 10:28:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.457 10:28:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.458 10:28:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.458 10:28:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.458 10:28:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.458 10:28:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.458 10:28:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.458 10:28:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.458 10:28:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.458 10:28:09 -- paths/export.sh@5 -- # export PATH 00:31:36.458 10:28:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.458 10:28:09 -- nvmf/common.sh@46 -- # : 0 00:31:36.458 10:28:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:36.458 10:28:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:36.458 10:28:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:36.458 10:28:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.458 10:28:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.458 10:28:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:36.458 10:28:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:36.458 10:28:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:36.458 10:28:09 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:36.458 10:28:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:36.458 10:28:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.458 10:28:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:36.458 10:28:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:36.458 10:28:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:36.458 10:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.458 10:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:36.458 10:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.458 10:28:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:36.458 10:28:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:36.458 10:28:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:36.458 10:28:09 -- common/autotest_common.sh@10 -- # set +x 00:31:43.030 10:28:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:43.030 10:28:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:43.030 10:28:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:43.030 10:28:15 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:43.030 10:28:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:43.030 10:28:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:43.030 10:28:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:43.030 10:28:15 -- nvmf/common.sh@294 -- # net_devs=() 00:31:43.030 10:28:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:43.030 10:28:15 -- nvmf/common.sh@295 -- # e810=() 00:31:43.030 10:28:15 -- nvmf/common.sh@295 -- # local -ga e810 00:31:43.030 10:28:15 -- nvmf/common.sh@296 -- # x722=() 00:31:43.030 10:28:15 -- nvmf/common.sh@296 -- # local -ga x722 00:31:43.030 10:28:15 -- nvmf/common.sh@297 -- # mlx=() 00:31:43.030 10:28:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:43.030 10:28:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.030 10:28:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:43.030 10:28:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:43.030 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:43.030 10:28:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:43.030 10:28:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:43.030 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:43.030 10:28:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:43.030 10:28:15 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:43.030 10:28:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.030 10:28:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.030 10:28:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:43.030 Found net devices under 0000:af:00.0: cvl_0_0 00:31:43.030 10:28:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:43.030 10:28:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.030 10:28:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.030 10:28:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:43.030 Found net devices under 0000:af:00.1: cvl_0_1 00:31:43.030 10:28:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:43.030 10:28:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:43.030 10:28:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.030 10:28:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.030 10:28:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:43.030 10:28:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.030 10:28:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.030 10:28:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:43.030 10:28:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.030 10:28:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.030 10:28:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:43.030 10:28:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:43.030 10:28:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.030 10:28:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.030 10:28:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.030 10:28:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.030 10:28:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:43.030 10:28:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.030 10:28:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.030 10:28:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.030 10:28:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:43.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:43.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:31:43.030 00:31:43.030 --- 10.0.0.2 ping statistics --- 00:31:43.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.030 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:43.030 10:28:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:43.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:31:43.030 00:31:43.030 --- 10.0.0.1 ping statistics --- 00:31:43.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.030 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:43.030 10:28:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.030 10:28:15 -- nvmf/common.sh@410 -- # return 0 00:31:43.030 10:28:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:43.030 10:28:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.030 10:28:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:43.030 10:28:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.030 10:28:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:43.030 10:28:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:43.030 10:28:15 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:43.030 10:28:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:43.030 10:28:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:43.030 10:28:15 -- common/autotest_common.sh@10 -- # set +x 00:31:43.030 10:28:15 -- nvmf/common.sh@469 -- # nvmfpid=3633259 00:31:43.030 10:28:15 -- nvmf/common.sh@470 -- # waitforlisten 3633259 00:31:43.030 10:28:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:43.030 10:28:15 -- common/autotest_common.sh@819 -- # '[' -z 3633259 ']' 00:31:43.030 10:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.030 10:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:43.030 10:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.030 10:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:43.030 10:28:15 -- common/autotest_common.sh@10 -- # set +x 00:31:43.030 [2024-04-17 10:28:15.461045] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:31:43.030 [2024-04-17 10:28:15.461101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.030 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.030 [2024-04-17 10:28:15.541779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.030 [2024-04-17 10:28:15.629988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.030 [2024-04-17 10:28:15.630130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:43.030 [2024-04-17 10:28:15.630142] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.030 [2024-04-17 10:28:15.630152] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.030 [2024-04-17 10:28:15.630176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.289 10:28:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:43.289 10:28:16 -- common/autotest_common.sh@852 -- # return 0 00:31:43.289 10:28:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:43.289 10:28:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:43.289 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.289 10:28:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.289 10:28:16 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:43.289 10:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.289 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.289 [2024-04-17 10:28:16.435242] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.289 [2024-04-17 10:28:16.443415] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:43.289 null0 00:31:43.289 [2024-04-17 10:28:16.475418] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.289 10:28:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.289 10:28:16 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3633420 00:31:43.289 10:28:16 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3633420 /tmp/host.sock 00:31:43.289 10:28:16 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:43.289 10:28:16 -- common/autotest_common.sh@819 -- # '[' -z 3633420 ']' 00:31:43.289 10:28:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:43.289 10:28:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:43.289 10:28:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:43.289 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:43.289 10:28:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:43.289 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.289 [2024-04-17 10:28:16.544993] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:31:43.289 [2024-04-17 10:28:16.545049] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633420 ] 00:31:43.289 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.548 [2024-04-17 10:28:16.626932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.548 [2024-04-17 10:28:16.713502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:43.548 [2024-04-17 10:28:16.713660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.548 10:28:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:43.548 10:28:16 -- common/autotest_common.sh@852 -- # return 0 00:31:43.548 10:28:16 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.548 10:28:16 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:43.548 10:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.548 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.548 10:28:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.548 10:28:16 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:43.548 10:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.548 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:43.548 10:28:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.548 10:28:16 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:43.548 10:28:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.548 10:28:16 -- common/autotest_common.sh@10 -- # set +x 00:31:44.919 [2024-04-17 10:28:17.883351] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:44.919 [2024-04-17 10:28:17.883375] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:44.919 [2024-04-17 10:28:17.883391] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:44.919 [2024-04-17 10:28:18.011875] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:44.919 [2024-04-17 10:28:18.239085] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:44.919 [2024-04-17 10:28:18.239131] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:44.919 [2024-04-17 10:28:18.239159] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:44.919 [2024-04-17 10:28:18.239176] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:44.919 [2024-04-17 10:28:18.239199] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:44.919 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:44.919 [2024-04-17 10:28:18.241767] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a28ad0 was disconnected and freed. delete nvme_qpair. 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.919 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:44.919 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:31:44.919 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.177 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.177 10:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.177 10:28:18 -- common/autotest_common.sh@10 -- # set +x 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.177 10:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:45.177 10:28:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:46.552 10:28:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:46.552 10:28:19 -- common/autotest_common.sh@10 -- # set +x 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:46.552 10:28:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:46.552 10:28:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.485 10:28:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.486 10:28:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.486 10:28:20 -- common/autotest_common.sh@10 -- # set +x 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.486 10:28:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:47.486 10:28:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.419 10:28:21 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.419 10:28:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.419 10:28:21 -- common/autotest_common.sh@10 -- # set +x 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.419 10:28:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.419 10:28:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.355 10:28:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.355 10:28:22 -- common/autotest_common.sh@10 -- # set +x 00:31:49.355 10:28:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:49.355 10:28:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:50.729 [2024-04-17 10:28:23.680003] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:50.729 [2024-04-17 10:28:23.680053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.729 [2024-04-17 10:28:23.680068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.729 [2024-04-17 10:28:23.680081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.729 [2024-04-17 10:28:23.680091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.729 [2024-04-17 10:28:23.680102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.729 [2024-04-17 10:28:23.680112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.729 [2024-04-17 10:28:23.680123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.729 [2024-04-17 10:28:23.680132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.729 [2024-04-17 10:28:23.680143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:50.729 [2024-04-17 10:28:23.680153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:50.729 [2024-04-17 10:28:23.680163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19efdf0 is same with the state(5) to be set 00:31:50.729 10:28:23 -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.729 10:28:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.729 10:28:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.730 10:28:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:50.730 10:28:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.730 10:28:23 -- common/autotest_common.sh@10 -- # set +x 00:31:50.730 10:28:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:50.730 [2024-04-17 10:28:23.690022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19efdf0 (9): Bad file descriptor 00:31:50.730 [2024-04-17 10:28:23.700068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:50.730 10:28:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:50.730 10:28:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:50.730 10:28:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.694 [2024-04-17 10:28:24.732690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:51.694 10:28:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.694 10:28:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.694 10:28:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.694 10:28:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:51.694 10:28:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.694 10:28:24 -- common/autotest_common.sh@10 -- # set +x 00:31:51.694 10:28:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.672 [2024-04-17 10:28:25.756701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:52.672 [2024-04-17 10:28:25.756777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19efdf0 with addr=10.0.0.2, port=4420 00:31:52.672 [2024-04-17 10:28:25.756807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19efdf0 is same with the state(5) to be set 00:31:52.672 [2024-04-17 10:28:25.757620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19efdf0 (9): Bad file descriptor 00:31:52.672 [2024-04-17 10:28:25.757703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
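The get_bdev_list / sleep 1 loop that keeps reappearing in the trace is how the script watches for the namespace bdev to vanish while these connection errors accumulate. A minimal bash sketch of that polling pattern, assuming rpc.py is on PATH and the host-side RPC socket is the /tmp/host.sock seen above; the 30-attempt cap is an added assumption so the sketch cannot spin forever:

    get_bdev_list() {
        # List bdev names from the host SPDK app, sorted and space-joined
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value
        local expected=$1 tries=0
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            (( ++tries > 30 )) && return 1   # retry cap is an assumption, not in the trace
            sleep 1
        done
    }

In the run above the helper is first told to wait for nvme0n1 right after the discovery attach, and then for an empty list ('') once the target-side address is removed.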
00:31:52.672 [2024-04-17 10:28:25.757752] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:52.672 [2024-04-17 10:28:25.757798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.672 [2024-04-17 10:28:25.757828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.672 [2024-04-17 10:28:25.757856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.672 [2024-04-17 10:28:25.757878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.672 [2024-04-17 10:28:25.757901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.672 [2024-04-17 10:28:25.757924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.672 [2024-04-17 10:28:25.757947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.672 [2024-04-17 10:28:25.757969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.673 [2024-04-17 10:28:25.757993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:52.673 [2024-04-17 10:28:25.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.673 [2024-04-17 10:28:25.758035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:52.673 [2024-04-17 10:28:25.758089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ef2e0 (9): Bad file descriptor 00:31:52.673 [2024-04-17 10:28:25.759092] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:52.673 [2024-04-17 10:28:25.759131] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:52.673 10:28:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:52.673 10:28:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.673 10:28:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.607 10:28:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.607 10:28:26 -- common/autotest_common.sh@10 -- # set +x 00:31:53.607 10:28:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.607 10:28:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:53.607 10:28:26 -- common/autotest_common.sh@10 -- # set +x 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.607 10:28:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.607 10:28:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:53.866 10:28:26 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:53.866 10:28:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.802 [2024-04-17 10:28:27.810382] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:54.802 [2024-04-17 10:28:27.810403] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:54.802 [2024-04-17 10:28:27.810420] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:54.802 [2024-04-17 10:28:27.938881] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:54.802 10:28:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:54.802 10:28:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.802 10:28:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:54.802 10:28:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:54.802 10:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.802 10:28:27 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:31:54.802 10:28:27 -- common/autotest_common.sh@10 -- # set +x 00:31:54.802 10:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.802 10:28:28 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:54.802 10:28:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:55.060 [2024-04-17 10:28:28.162275] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:55.060 [2024-04-17 10:28:28.162319] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:55.060 [2024-04-17 10:28:28.162343] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:55.060 [2024-04-17 10:28:28.162361] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:55.060 [2024-04-17 10:28:28.162371] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:55.060 [2024-04-17 10:28:28.168148] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19fdc00 was disconnected and freed. delete nvme_qpair. 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:55.997 10:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.997 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:55.997 10:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:55.997 10:28:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3633420 00:31:55.997 10:28:29 -- common/autotest_common.sh@926 -- # '[' -z 3633420 ']' 00:31:55.997 10:28:29 -- common/autotest_common.sh@930 -- # kill -0 3633420 00:31:55.997 10:28:29 -- common/autotest_common.sh@931 -- # uname 00:31:55.997 10:28:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:55.997 10:28:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3633420 00:31:55.997 10:28:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:55.997 10:28:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:55.997 10:28:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3633420' 00:31:55.997 killing process with pid 3633420 00:31:55.997 10:28:29 -- common/autotest_common.sh@945 -- # kill 3633420 00:31:55.997 10:28:29 -- common/autotest_common.sh@950 -- # wait 3633420 00:31:56.256 10:28:29 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:56.256 10:28:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:56.256 10:28:29 -- nvmf/common.sh@116 -- # sync 00:31:56.256 10:28:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:56.256 10:28:29 -- nvmf/common.sh@119 -- # set +e 00:31:56.256 10:28:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:56.256 10:28:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:56.256 rmmod nvme_tcp 00:31:56.256 rmmod nvme_fabrics 00:31:56.256 rmmod nvme_keyring 00:31:56.256 10:28:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:56.256 10:28:29 -- nvmf/common.sh@123 -- # set -e 00:31:56.256 10:28:29 
-- nvmf/common.sh@124 -- # return 0 00:31:56.256 10:28:29 -- nvmf/common.sh@477 -- # '[' -n 3633259 ']' 00:31:56.256 10:28:29 -- nvmf/common.sh@478 -- # killprocess 3633259 00:31:56.256 10:28:29 -- common/autotest_common.sh@926 -- # '[' -z 3633259 ']' 00:31:56.256 10:28:29 -- common/autotest_common.sh@930 -- # kill -0 3633259 00:31:56.256 10:28:29 -- common/autotest_common.sh@931 -- # uname 00:31:56.256 10:28:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:56.256 10:28:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3633259 00:31:56.256 10:28:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:56.256 10:28:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:56.256 10:28:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3633259' 00:31:56.256 killing process with pid 3633259 00:31:56.256 10:28:29 -- common/autotest_common.sh@945 -- # kill 3633259 00:31:56.256 10:28:29 -- common/autotest_common.sh@950 -- # wait 3633259 00:31:56.515 10:28:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:56.515 10:28:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:56.515 10:28:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:56.515 10:28:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.515 10:28:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:56.515 10:28:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.515 10:28:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.515 10:28:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.419 10:28:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:58.679 00:31:58.679 real 0m22.210s 00:31:58.679 user 0m27.120s 00:31:58.679 sys 0m5.673s 00:31:58.679 10:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.679 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:31:58.679 ************************************ 00:31:58.679 END TEST nvmf_discovery_remove_ifc 00:31:58.679 ************************************ 00:31:58.679 10:28:31 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:31:58.679 10:28:31 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.679 10:28:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:58.679 10:28:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.679 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:31:58.679 ************************************ 00:31:58.679 START TEST nvmf_digest 00:31:58.679 ************************************ 00:31:58.679 10:28:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:58.679 * Looking for test storage... 
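Condensed, the nvmf_discovery_remove_ifc run that just finished above does five things: attach through discovery with deliberately short reconnect/loss timeouts, wait for the bdev, pull the target-side address out from under the connection, wait for the bdev to be torn down, then restore the address and wait for the re-attached controller to surface a new bdev. A sketch of that sequence using the socket, netns and interface names printed in the trace (rpc.py stands in for the script's rpc_cmd wrapper; the killprocess/nvmftestfini teardown is omitted):

    # Attach via discovery; the short timeouts make path loss surface quickly
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    wait_for_bdev nvme0n1

    # Remove the target-side address and link
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''          # bdev must disappear once the ctrlr-loss timeout expires

    # Restore connectivity; the discovery poller re-attaches as nvme1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1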
00:31:58.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:58.679 10:28:31 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.679 10:28:31 -- nvmf/common.sh@7 -- # uname -s 00:31:58.679 10:28:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.679 10:28:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.679 10:28:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.679 10:28:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.679 10:28:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.679 10:28:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.679 10:28:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.679 10:28:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.679 10:28:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.679 10:28:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.679 10:28:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:58.679 10:28:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:58.679 10:28:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.679 10:28:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.679 10:28:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.679 10:28:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.679 10:28:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.679 10:28:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.679 10:28:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.679 10:28:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.679 10:28:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.679 10:28:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.679 10:28:31 -- paths/export.sh@5 -- # export PATH 00:31:58.679 10:28:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.679 10:28:31 -- nvmf/common.sh@46 -- # : 0 00:31:58.679 10:28:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:58.679 10:28:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:58.679 10:28:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:58.679 10:28:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.679 10:28:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.679 10:28:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:58.679 10:28:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:58.679 10:28:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:58.679 10:28:31 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:58.679 10:28:31 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:58.679 10:28:31 -- host/digest.sh@16 -- # runtime=2 00:31:58.679 10:28:31 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:58.679 10:28:31 -- host/digest.sh@132 -- # nvmftestinit 00:31:58.679 10:28:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:58.679 10:28:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.679 10:28:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:58.679 10:28:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:58.679 10:28:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:58.680 10:28:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.680 10:28:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.680 10:28:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.680 10:28:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:58.680 10:28:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:58.680 10:28:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:58.680 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:32:05.244 10:28:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:05.244 10:28:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:05.244 10:28:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:05.244 10:28:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:05.244 10:28:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:05.244 10:28:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:05.244 10:28:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:05.244 10:28:37 -- 
nvmf/common.sh@294 -- # net_devs=() 00:32:05.244 10:28:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:05.244 10:28:37 -- nvmf/common.sh@295 -- # e810=() 00:32:05.244 10:28:37 -- nvmf/common.sh@295 -- # local -ga e810 00:32:05.244 10:28:37 -- nvmf/common.sh@296 -- # x722=() 00:32:05.244 10:28:37 -- nvmf/common.sh@296 -- # local -ga x722 00:32:05.244 10:28:37 -- nvmf/common.sh@297 -- # mlx=() 00:32:05.244 10:28:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:05.244 10:28:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.244 10:28:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:05.244 10:28:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:05.244 10:28:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:05.244 10:28:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.244 10:28:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:05.244 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:05.244 10:28:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.244 10:28:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:05.244 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:05.244 10:28:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:05.244 10:28:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:05.244 10:28:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.244 10:28:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.244 10:28:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.244 10:28:37 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.244 10:28:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:05.245 Found net devices under 0000:af:00.0: cvl_0_0 00:32:05.245 10:28:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.245 10:28:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.245 10:28:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.245 10:28:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.245 10:28:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.245 10:28:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:05.245 Found net devices under 0000:af:00.1: cvl_0_1 00:32:05.245 10:28:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.245 10:28:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:05.245 10:28:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:05.245 10:28:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:05.245 10:28:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:05.245 10:28:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:05.245 10:28:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.245 10:28:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.245 10:28:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.245 10:28:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:05.245 10:28:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.245 10:28:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.245 10:28:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:05.245 10:28:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.245 10:28:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.245 10:28:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:05.245 10:28:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:05.245 10:28:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.245 10:28:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.245 10:28:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.245 10:28:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.245 10:28:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:05.245 10:28:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.245 10:28:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.245 10:28:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.245 10:28:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:05.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:32:05.245 00:32:05.245 --- 10.0.0.2 ping statistics --- 00:32:05.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.245 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:32:05.245 10:28:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:05.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:32:05.245 00:32:05.245 --- 10.0.0.1 ping statistics --- 00:32:05.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.245 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:32:05.245 10:28:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.245 10:28:37 -- nvmf/common.sh@410 -- # return 0 00:32:05.245 10:28:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:05.245 10:28:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.245 10:28:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:05.245 10:28:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:05.245 10:28:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.245 10:28:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:05.245 10:28:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:05.245 10:28:37 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:05.245 10:28:37 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:32:05.245 10:28:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.245 10:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.245 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:32:05.245 ************************************ 00:32:05.245 START TEST nvmf_digest_clean 00:32:05.245 ************************************ 00:32:05.245 10:28:37 -- common/autotest_common.sh@1104 -- # run_digest 00:32:05.245 10:28:37 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:32:05.245 10:28:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:05.245 10:28:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:05.245 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:32:05.245 10:28:37 -- nvmf/common.sh@469 -- # nvmfpid=3639374 00:32:05.245 10:28:37 -- nvmf/common.sh@470 -- # waitforlisten 3639374 00:32:05.245 10:28:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:05.245 10:28:37 -- common/autotest_common.sh@819 -- # '[' -z 3639374 ']' 00:32:05.245 10:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.245 10:28:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.245 10:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.245 10:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.245 10:28:37 -- common/autotest_common.sh@10 -- # set +x 00:32:05.245 [2024-04-17 10:28:37.673941] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
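Before any digest runs start, nvmftestinit above builds the test network out of the two E810 ports: the target port cvl_0_0 is moved into its own namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, the NVMe/TCP port is opened in iptables, and a ping in each direction proves the path. Condensed to the bare commands from the trace (the preliminary address flushes are left out):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the netns

    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

Every nvmf_tgt instance in the rest of this log is then launched through 'ip netns exec cvl_0_0_ns_spdk', so its 10.0.0.2:4420 listener lives inside that namespace.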
00:32:05.245 [2024-04-17 10:28:37.673993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.245 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.245 [2024-04-17 10:28:37.759470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.245 [2024-04-17 10:28:37.846371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:05.245 [2024-04-17 10:28:37.846510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.245 [2024-04-17 10:28:37.846521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.245 [2024-04-17 10:28:37.846531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.245 [2024-04-17 10:28:37.846551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.503 10:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:05.503 10:28:38 -- common/autotest_common.sh@852 -- # return 0 00:32:05.504 10:28:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:05.504 10:28:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:05.504 10:28:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.504 10:28:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.504 10:28:38 -- host/digest.sh@120 -- # common_target_config 00:32:05.504 10:28:38 -- host/digest.sh@43 -- # rpc_cmd 00:32:05.504 10:28:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.504 10:28:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.504 null0 00:32:05.504 [2024-04-17 10:28:38.727394] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.504 [2024-04-17 10:28:38.751569] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.504 10:28:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.504 10:28:38 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:32:05.504 10:28:38 -- host/digest.sh@77 -- # local rw bs qd 00:32:05.504 10:28:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:05.504 10:28:38 -- host/digest.sh@80 -- # rw=randread 00:32:05.504 10:28:38 -- host/digest.sh@80 -- # bs=4096 00:32:05.504 10:28:38 -- host/digest.sh@80 -- # qd=128 00:32:05.504 10:28:38 -- host/digest.sh@82 -- # bperfpid=3639652 00:32:05.504 10:28:38 -- host/digest.sh@83 -- # waitforlisten 3639652 /var/tmp/bperf.sock 00:32:05.504 10:28:38 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:05.504 10:28:38 -- common/autotest_common.sh@819 -- # '[' -z 3639652 ']' 00:32:05.504 10:28:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:05.504 10:28:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.504 10:28:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:05.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
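Each run_bperf iteration in this digest test follows the same recipe, which plays out around this point in the trace: start bdevperf idle on its own RPC socket (-z --wait-for-rpc), finish framework init over that socket, attach the target with TCP data digest enabled (--ddgst), and only then kick the timed workload. A sketch of the first iteration (randread, 4 KiB, queue depth 128), assuming the repo path from the trace is shortened to $SPDK and using a plain socket-file wait in place of the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # -z keeps bdevperf alive with no bdevs; --wait-for-rpc defers framework init
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    while [[ ! -S $BPERF_SOCK ]]; do sleep 0.1; done      # stand-in for waitforlisten

    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Drive the configured 2-second randread window
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The later iterations in this log only change the workload knobs (-w randread/randwrite, -o 4096/131072, -q 128/16); everything else, including the --ddgst attach, is identical.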
00:32:05.504 10:28:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.504 10:28:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.504 [2024-04-17 10:28:38.800463] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:05.504 [2024-04-17 10:28:38.800517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639652 ] 00:32:05.504 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.762 [2024-04-17 10:28:38.875482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.762 [2024-04-17 10:28:38.958561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.762 10:28:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:05.762 10:28:39 -- common/autotest_common.sh@852 -- # return 0 00:32:05.762 10:28:39 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:05.762 10:28:39 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:05.762 10:28:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:06.020 10:28:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.020 10:28:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.588 nvme0n1 00:32:06.588 10:28:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:06.588 10:28:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:06.846 Running I/O for 2 seconds... 
00:32:08.747 00:32:08.747 Latency(us) 00:32:08.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.747 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:08.747 nvme0n1 : 2.01 14073.46 54.97 0.00 0.00 9086.32 3127.85 21209.83 00:32:08.747 =================================================================================================================== 00:32:08.747 Total : 14073.46 54.97 0.00 0.00 9086.32 3127.85 21209.83 00:32:08.747 0 00:32:08.747 10:28:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:08.747 10:28:42 -- host/digest.sh@92 -- # get_accel_stats 00:32:08.747 10:28:42 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:08.747 10:28:42 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:08.747 | select(.opcode=="crc32c") 00:32:08.747 | "\(.module_name) \(.executed)"' 00:32:08.747 10:28:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:09.008 10:28:42 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:09.008 10:28:42 -- host/digest.sh@93 -- # exp_module=software 00:32:09.008 10:28:42 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:09.008 10:28:42 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:09.008 10:28:42 -- host/digest.sh@97 -- # killprocess 3639652 00:32:09.008 10:28:42 -- common/autotest_common.sh@926 -- # '[' -z 3639652 ']' 00:32:09.008 10:28:42 -- common/autotest_common.sh@930 -- # kill -0 3639652 00:32:09.008 10:28:42 -- common/autotest_common.sh@931 -- # uname 00:32:09.008 10:28:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:09.008 10:28:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3639652 00:32:09.008 10:28:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:09.008 10:28:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:09.008 10:28:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3639652' 00:32:09.008 killing process with pid 3639652 00:32:09.008 10:28:42 -- common/autotest_common.sh@945 -- # kill 3639652 00:32:09.008 Received shutdown signal, test time was about 2.000000 seconds 00:32:09.008 00:32:09.008 Latency(us) 00:32:09.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.008 =================================================================================================================== 00:32:09.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.008 10:28:42 -- common/autotest_common.sh@950 -- # wait 3639652 00:32:09.272 10:28:42 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:09.272 10:28:42 -- host/digest.sh@77 -- # local rw bs qd 00:32:09.272 10:28:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:09.272 10:28:42 -- host/digest.sh@80 -- # rw=randread 00:32:09.272 10:28:42 -- host/digest.sh@80 -- # bs=131072 00:32:09.272 10:28:42 -- host/digest.sh@80 -- # qd=16 00:32:09.272 10:28:42 -- host/digest.sh@82 -- # bperfpid=3640239 00:32:09.272 10:28:42 -- host/digest.sh@83 -- # waitforlisten 3640239 /var/tmp/bperf.sock 00:32:09.272 10:28:42 -- common/autotest_common.sh@819 -- # '[' -z 3640239 ']' 00:32:09.272 10:28:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:09.272 10:28:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:09.272 10:28:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bperf.sock...' 00:32:09.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:09.272 10:28:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:09.272 10:28:42 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:09.272 10:28:42 -- common/autotest_common.sh@10 -- # set +x 00:32:09.272 [2024-04-17 10:28:42.549036] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:09.272 [2024-04-17 10:28:42.549085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640239 ] 00:32:09.272 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:09.272 Zero copy mechanism will not be used. 00:32:09.272 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.530 [2024-04-17 10:28:42.610474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.530 [2024-04-17 10:28:42.690873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.530 10:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.530 10:28:42 -- common/autotest_common.sh@852 -- # return 0 00:32:09.530 10:28:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:09.530 10:28:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:09.530 10:28:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:09.789 10:28:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.789 10:28:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:10.047 nvme0n1 00:32:10.047 10:28:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:10.047 10:28:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:10.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:10.306 Zero copy mechanism will not be used. 00:32:10.306 Running I/O for 2 seconds... 
00:32:12.208 00:32:12.208 Latency(us) 00:32:12.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.208 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:12.208 nvme0n1 : 2.00 3940.36 492.54 0.00 0.00 4056.97 960.70 9889.98 00:32:12.208 =================================================================================================================== 00:32:12.208 Total : 3940.36 492.54 0.00 0.00 4056.97 960.70 9889.98 00:32:12.208 0 00:32:12.208 10:28:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:12.208 10:28:45 -- host/digest.sh@92 -- # get_accel_stats 00:32:12.208 10:28:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:12.208 10:28:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:12.208 10:28:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:12.208 | select(.opcode=="crc32c") 00:32:12.208 | "\(.module_name) \(.executed)"' 00:32:12.466 10:28:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:12.466 10:28:45 -- host/digest.sh@93 -- # exp_module=software 00:32:12.466 10:28:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:12.466 10:28:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:12.466 10:28:45 -- host/digest.sh@97 -- # killprocess 3640239 00:32:12.466 10:28:45 -- common/autotest_common.sh@926 -- # '[' -z 3640239 ']' 00:32:12.466 10:28:45 -- common/autotest_common.sh@930 -- # kill -0 3640239 00:32:12.466 10:28:45 -- common/autotest_common.sh@931 -- # uname 00:32:12.466 10:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:12.466 10:28:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3640239 00:32:12.466 10:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:12.466 10:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:12.466 10:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3640239' 00:32:12.466 killing process with pid 3640239 00:32:12.466 10:28:45 -- common/autotest_common.sh@945 -- # kill 3640239 00:32:12.466 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.466 00:32:12.466 Latency(us) 00:32:12.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.466 =================================================================================================================== 00:32:12.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.466 10:28:45 -- common/autotest_common.sh@950 -- # wait 3640239 00:32:12.724 10:28:45 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:12.724 10:28:45 -- host/digest.sh@77 -- # local rw bs qd 00:32:12.724 10:28:45 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:12.724 10:28:45 -- host/digest.sh@80 -- # rw=randwrite 00:32:12.724 10:28:45 -- host/digest.sh@80 -- # bs=4096 00:32:12.724 10:28:45 -- host/digest.sh@80 -- # qd=128 00:32:12.724 10:28:45 -- host/digest.sh@82 -- # bperfpid=3640960 00:32:12.724 10:28:45 -- host/digest.sh@83 -- # waitforlisten 3640960 /var/tmp/bperf.sock 00:32:12.724 10:28:45 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:12.724 10:28:45 -- common/autotest_common.sh@819 -- # '[' -z 3640960 ']' 00:32:12.724 10:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:32:12.725 10:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:12.725 10:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:12.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:12.725 10:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:12.725 10:28:45 -- common/autotest_common.sh@10 -- # set +x 00:32:12.725 [2024-04-17 10:28:45.915757] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:12.725 [2024-04-17 10:28:45.915818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640960 ] 00:32:12.725 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.725 [2024-04-17 10:28:45.989320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.988 [2024-04-17 10:28:46.078504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.988 10:28:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:12.988 10:28:46 -- common/autotest_common.sh@852 -- # return 0 00:32:12.988 10:28:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:12.988 10:28:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:12.988 10:28:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:13.246 10:28:46 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.246 10:28:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.504 nvme0n1 00:32:13.504 10:28:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:13.504 10:28:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:13.762 Running I/O for 2 seconds... 
00:32:15.667 00:32:15.667 Latency(us) 00:32:15.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.667 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.667 nvme0n1 : 2.00 19127.60 74.72 0.00 0.00 6683.13 3366.17 16443.58 00:32:15.667 =================================================================================================================== 00:32:15.667 Total : 19127.60 74.72 0.00 0.00 6683.13 3366.17 16443.58 00:32:15.667 0 00:32:15.667 10:28:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:15.667 10:28:48 -- host/digest.sh@92 -- # get_accel_stats 00:32:15.667 10:28:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:15.667 10:28:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:15.667 | select(.opcode=="crc32c") 00:32:15.667 | "\(.module_name) \(.executed)"' 00:32:15.667 10:28:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:15.926 10:28:49 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:15.926 10:28:49 -- host/digest.sh@93 -- # exp_module=software 00:32:15.926 10:28:49 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:15.926 10:28:49 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:15.926 10:28:49 -- host/digest.sh@97 -- # killprocess 3640960 00:32:15.927 10:28:49 -- common/autotest_common.sh@926 -- # '[' -z 3640960 ']' 00:32:15.927 10:28:49 -- common/autotest_common.sh@930 -- # kill -0 3640960 00:32:15.927 10:28:49 -- common/autotest_common.sh@931 -- # uname 00:32:15.927 10:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:15.927 10:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3640960 00:32:15.927 10:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:15.927 10:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:15.927 10:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3640960' 00:32:15.927 killing process with pid 3640960 00:32:15.927 10:28:49 -- common/autotest_common.sh@945 -- # kill 3640960 00:32:15.927 Received shutdown signal, test time was about 2.000000 seconds 00:32:15.927 00:32:15.927 Latency(us) 00:32:15.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.927 =================================================================================================================== 00:32:15.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.927 10:28:49 -- common/autotest_common.sh@950 -- # wait 3640960 00:32:16.185 10:28:49 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:16.185 10:28:49 -- host/digest.sh@77 -- # local rw bs qd 00:32:16.186 10:28:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:16.186 10:28:49 -- host/digest.sh@80 -- # rw=randwrite 00:32:16.186 10:28:49 -- host/digest.sh@80 -- # bs=131072 00:32:16.186 10:28:49 -- host/digest.sh@80 -- # qd=16 00:32:16.186 10:28:49 -- host/digest.sh@82 -- # bperfpid=3641549 00:32:16.186 10:28:49 -- host/digest.sh@83 -- # waitforlisten 3641549 /var/tmp/bperf.sock 00:32:16.186 10:28:49 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:16.186 10:28:49 -- common/autotest_common.sh@819 -- # '[' -z 3641549 ']' 00:32:16.186 10:28:49 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:16.186 10:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:16.186 10:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:16.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:16.186 10:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:16.186 10:28:49 -- common/autotest_common.sh@10 -- # set +x 00:32:16.444 [2024-04-17 10:28:49.521544] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:16.444 [2024-04-17 10:28:49.521603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641549 ] 00:32:16.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:16.444 Zero copy mechanism will not be used. 00:32:16.444 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.444 [2024-04-17 10:28:49.593424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.444 [2024-04-17 10:28:49.675381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.444 10:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:16.444 10:28:49 -- common/autotest_common.sh@852 -- # return 0 00:32:16.444 10:28:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:16.444 10:28:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:16.444 10:28:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:17.010 10:28:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.010 10:28:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.268 nvme0n1 00:32:17.268 10:28:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:17.268 10:28:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:17.526 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:17.526 Zero copy mechanism will not be used. 00:32:17.526 Running I/O for 2 seconds... 
00:32:19.431 00:32:19.431 Latency(us) 00:32:19.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.431 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:19.431 nvme0n1 : 2.00 5675.20 709.40 0.00 0.00 2813.32 2174.60 9353.77 00:32:19.431 =================================================================================================================== 00:32:19.431 Total : 5675.20 709.40 0.00 0.00 2813.32 2174.60 9353.77 00:32:19.431 0 00:32:19.431 10:28:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:19.431 10:28:52 -- host/digest.sh@92 -- # get_accel_stats 00:32:19.431 10:28:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:19.431 10:28:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:19.431 | select(.opcode=="crc32c") 00:32:19.431 | "\(.module_name) \(.executed)"' 00:32:19.431 10:28:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:19.690 10:28:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:19.690 10:28:52 -- host/digest.sh@93 -- # exp_module=software 00:32:19.690 10:28:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:19.690 10:28:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:19.690 10:28:52 -- host/digest.sh@97 -- # killprocess 3641549 00:32:19.690 10:28:52 -- common/autotest_common.sh@926 -- # '[' -z 3641549 ']' 00:32:19.691 10:28:52 -- common/autotest_common.sh@930 -- # kill -0 3641549 00:32:19.691 10:28:52 -- common/autotest_common.sh@931 -- # uname 00:32:19.691 10:28:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:19.691 10:28:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3641549 00:32:19.691 10:28:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:19.691 10:28:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:19.691 10:28:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3641549' 00:32:19.691 killing process with pid 3641549 00:32:19.691 10:28:52 -- common/autotest_common.sh@945 -- # kill 3641549 00:32:19.691 Received shutdown signal, test time was about 2.000000 seconds 00:32:19.691 00:32:19.691 Latency(us) 00:32:19.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.691 =================================================================================================================== 00:32:19.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.691 10:28:52 -- common/autotest_common.sh@950 -- # wait 3641549 00:32:19.949 10:28:53 -- host/digest.sh@126 -- # killprocess 3639374 00:32:19.949 10:28:53 -- common/autotest_common.sh@926 -- # '[' -z 3639374 ']' 00:32:19.949 10:28:53 -- common/autotest_common.sh@930 -- # kill -0 3639374 00:32:19.949 10:28:53 -- common/autotest_common.sh@931 -- # uname 00:32:19.949 10:28:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:19.949 10:28:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3639374 00:32:19.949 10:28:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:19.949 10:28:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:19.949 10:28:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3639374' 00:32:19.949 killing process with pid 3639374 00:32:19.949 10:28:53 -- common/autotest_common.sh@945 -- # kill 3639374 00:32:19.949 10:28:53 -- common/autotest_common.sh@950 -- # wait 3639374 
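The pass/fail decision traced above reduces to reading the crc32c counters back over the same socket and checking that the expected accel module actually did the digest work. A minimal sketch, reusing the jq filter exactly as it appears in the trace ($SPDK as above; the process-substitution form is illustrative, the script uses a small wrapper, and the expected module is "software" here because no accel offload is configured):

  read -r acc_module acc_executed < <($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]]   # digest operations must have run in the expected module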
00:32:20.209 00:32:20.209 real 0m15.837s 00:32:20.209 user 0m30.777s 00:32:20.209 sys 0m4.306s 00:32:20.209 10:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.209 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.209 ************************************ 00:32:20.209 END TEST nvmf_digest_clean 00:32:20.209 ************************************ 00:32:20.209 10:28:53 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:20.209 10:28:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:20.209 10:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:20.209 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.209 ************************************ 00:32:20.209 START TEST nvmf_digest_error 00:32:20.209 ************************************ 00:32:20.209 10:28:53 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:20.209 10:28:53 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:20.209 10:28:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:20.209 10:28:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:20.209 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.209 10:28:53 -- nvmf/common.sh@469 -- # nvmfpid=3642367 00:32:20.209 10:28:53 -- nvmf/common.sh@470 -- # waitforlisten 3642367 00:32:20.209 10:28:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:20.209 10:28:53 -- common/autotest_common.sh@819 -- # '[' -z 3642367 ']' 00:32:20.209 10:28:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.209 10:28:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:20.209 10:28:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.209 10:28:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:20.209 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.468 [2024-04-17 10:28:53.552660] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:20.468 [2024-04-17 10:28:53.552707] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.468 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.468 [2024-04-17 10:28:53.625027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.468 [2024-04-17 10:28:53.712910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:20.468 [2024-04-17 10:28:53.713056] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.468 [2024-04-17 10:28:53.713068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.468 [2024-04-17 10:28:53.713078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:20.468 [2024-04-17 10:28:53.713098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.468 10:28:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:20.468 10:28:53 -- common/autotest_common.sh@852 -- # return 0 00:32:20.468 10:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:20.468 10:28:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:20.468 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.468 10:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.468 10:28:53 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:20.468 10:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.468 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.468 [2024-04-17 10:28:53.793624] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:20.468 10:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.468 10:28:53 -- host/digest.sh@104 -- # common_target_config 00:32:20.468 10:28:53 -- host/digest.sh@43 -- # rpc_cmd 00:32:20.468 10:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:20.468 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.727 null0 00:32:20.727 [2024-04-17 10:28:53.891856] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.727 [2024-04-17 10:28:53.916042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.727 10:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.727 10:28:53 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:20.727 10:28:53 -- host/digest.sh@54 -- # local rw bs qd 00:32:20.727 10:28:53 -- host/digest.sh@56 -- # rw=randread 00:32:20.727 10:28:53 -- host/digest.sh@56 -- # bs=4096 00:32:20.727 10:28:53 -- host/digest.sh@56 -- # qd=128 00:32:20.727 10:28:53 -- host/digest.sh@58 -- # bperfpid=3642396 00:32:20.727 10:28:53 -- host/digest.sh@60 -- # waitforlisten 3642396 /var/tmp/bperf.sock 00:32:20.727 10:28:53 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:20.727 10:28:53 -- common/autotest_common.sh@819 -- # '[' -z 3642396 ']' 00:32:20.727 10:28:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:20.727 10:28:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:20.727 10:28:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:20.727 10:28:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:20.727 10:28:53 -- common/autotest_common.sh@10 -- # set +x 00:32:20.727 [2024-04-17 10:28:53.966376] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:32:20.727 [2024-04-17 10:28:53.966429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642396 ] 00:32:20.727 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.727 [2024-04-17 10:28:54.039991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.985 [2024-04-17 10:28:54.128219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.920 10:28:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:21.921 10:28:54 -- common/autotest_common.sh@852 -- # return 0 00:32:21.921 10:28:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.921 10:28:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.921 10:28:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:21.921 10:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.921 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:32:21.921 10:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.921 10:28:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.921 10:28:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.488 nvme0n1 00:32:22.488 10:28:55 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:22.488 10:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.488 10:28:55 -- common/autotest_common.sh@10 -- # set +x 00:32:22.488 10:28:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.488 10:28:55 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:22.488 10:28:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.488 Running I/O for 2 seconds... 
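The error-path setup traced above differs from the clean runs mainly in the accel configuration. Condensed as a sketch with the arguments as logged ($SPDK as above; rpc_cmd is the autotest helper addressing the nvmf target's default RPC socket, as opposed to the explicit -s /var/tmp/bperf.sock calls to bdevperf):

  rpc_cmd accel_assign_opc -o crc32c -m error                     # target: route crc32c through the error module
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable           # keep injection off while attaching
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256    # now corrupt crc32c results (-i 256 as in the trace)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # the repeated 'data digest error' / 'COMMAND TRANSIENT TRANSPORT ERROR' records that follow are the expected outcome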
00:32:22.488 [2024-04-17 10:28:55.753750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.488 [2024-04-17 10:28:55.753788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.488 [2024-04-17 10:28:55.753803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.488 [2024-04-17 10:28:55.772074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.488 [2024-04-17 10:28:55.772103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.488 [2024-04-17 10:28:55.772116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.488 [2024-04-17 10:28:55.788831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.488 [2024-04-17 10:28:55.788859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.488 [2024-04-17 10:28:55.788871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.488 [2024-04-17 10:28:55.802120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.488 [2024-04-17 10:28:55.802149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.488 [2024-04-17 10:28:55.802161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.488 [2024-04-17 10:28:55.820221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.488 [2024-04-17 10:28:55.820248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.488 [2024-04-17 10:28:55.820261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.838935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.838970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.838982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.857529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.857571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.875884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.875913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.875926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.893011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.893052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.906555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.906583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.906595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.924701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.924728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.943236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.943262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.943274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.961977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.962003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.962015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.979496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.979523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.979535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:55.997770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:55.997797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:55.997809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:56.016440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:56.016467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:56.016479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:56.034862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:56.034890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:56.034903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:56.053372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.747 [2024-04-17 10:28:56.053400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.747 [2024-04-17 10:28:56.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:22.747 [2024-04-17 10:28:56.071868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:22.748 [2024-04-17 10:28:56.071897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:22.748 [2024-04-17 10:28:56.071909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.006 [2024-04-17 10:28:56.090006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.006 [2024-04-17 10:28:56.090034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.006 [2024-04-17 10:28:56.090045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.006 [2024-04-17 10:28:56.108020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.006 [2024-04-17 10:28:56.108049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.006 [2024-04-17 10:28:56.108061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.006 [2024-04-17 10:28:56.126594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.006 [2024-04-17 10:28:56.126622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.006 [2024-04-17 10:28:56.126634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.006 [2024-04-17 10:28:56.144889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.006 [2024-04-17 10:28:56.144916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.006 [2024-04-17 10:28:56.144932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.006 [2024-04-17 10:28:56.162894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.006 [2024-04-17 10:28:56.162921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.006 [2024-04-17 10:28:56.162933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.180947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.180974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.180986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.199195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.199223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.199235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.217662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.217690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.217702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.235912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.235938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 
[2024-04-17 10:28:56.235951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.254202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.254228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.254240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.272826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.272853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.272865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.291199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.291227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.291239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.308943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.308970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.308982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.007 [2024-04-17 10:28:56.327607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.007 [2024-04-17 10:28:56.327635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.007 [2024-04-17 10:28:56.327654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.346742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.346769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.346780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.363570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.363596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7900 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.363609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.382257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.382284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.400558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.400586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.400598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.418404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.418432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.418444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.436633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.436665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.436677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.454710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.454737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.454754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.472874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.472901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.472913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.490874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.490905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:12019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.490918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.509194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.509220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.509232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.527599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.527626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.527638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.546076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.546104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.546115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.564425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.564452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.564464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.266 [2024-04-17 10:28:56.582499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.266 [2024-04-17 10:28:56.582527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.266 [2024-04-17 10:28:56.582539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.525 [2024-04-17 10:28:56.600807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.525 [2024-04-17 10:28:56.600834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.525 [2024-04-17 10:28:56.600846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.525 [2024-04-17 10:28:56.618695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.525 [2024-04-17 10:28:56.618726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.525 [2024-04-17 10:28:56.618739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.525 [2024-04-17 10:28:56.636893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.525 [2024-04-17 10:28:56.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.525 [2024-04-17 10:28:56.636931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.525 [2024-04-17 10:28:56.655154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.525 [2024-04-17 10:28:56.655180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.655192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.673461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.673487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.673499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.691640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.691671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.691683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.709981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.710007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.710019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.728391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.728418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.728430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.746898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.746925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.746937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.765311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.765337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.765349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.783584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.783611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.783623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.802131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.802158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.802171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.820544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.820571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.820583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.839168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.839195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.526 [2024-04-17 10:28:56.857368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.526 [2024-04-17 10:28:56.857395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.526 [2024-04-17 10:28:56.857407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.875848] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.875874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.875886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.894575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.894601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.894613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.906366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.906392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.906404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.925099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.925125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.925142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.942442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.942469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.942481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.960723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.960749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.960761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:56.978944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.978971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.978983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:23.785 [2024-04-17 10:28:56.997332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:56.997359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:56.997371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.015583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.015610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.015622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.033867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.033894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.033907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.052936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.052962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.052974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.071103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.071128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.071140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.089282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.089311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.089323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.785 [2024-04-17 10:28:57.107451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:23.785 [2024-04-17 10:28:57.107478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.785 [2024-04-17 10:28:57.107489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.125485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.125523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.143625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.143656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.143668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.161804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.161830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.161842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.180504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.180530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.180542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.197684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.197712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.197724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.215973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.216000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.216012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.235069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.235106] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.253378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.253404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.253416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.271600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.271626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.271638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.289772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.289798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.289809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.307548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.307575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.307587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.325873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.325900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.325911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.344179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.344206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.344218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.048 [2024-04-17 10:28:57.362852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.048 [2024-04-17 10:28:57.362879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.048 [2024-04-17 10:28:57.362890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.338 [2024-04-17 10:28:57.381178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.338 [2024-04-17 10:28:57.381209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.338 [2024-04-17 10:28:57.381222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.338 [2024-04-17 10:28:57.400336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.338 [2024-04-17 10:28:57.400367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.338 [2024-04-17 10:28:57.400380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.338 [2024-04-17 10:28:57.417785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.338 [2024-04-17 10:28:57.417812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.338 [2024-04-17 10:28:57.417824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.338 [2024-04-17 10:28:57.436842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.338 [2024-04-17 10:28:57.436869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.436880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.455012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.455038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.455050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.473277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.473304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.473316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.491303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.491329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:24.339 [2024-04-17 10:28:57.491341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.509537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.509564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.509576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.527939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.527978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.546475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.546502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.546514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.564728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.564755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.564768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.582893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.582920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.582932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.602021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.602048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.619348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.619377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11189 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.619390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.638058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.638085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.638096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.339 [2024-04-17 10:28:57.657083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.339 [2024-04-17 10:28:57.657110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.339 [2024-04-17 10:28:57.657123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.622 [2024-04-17 10:28:57.675361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.622 [2024-04-17 10:28:57.675388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.622 [2024-04-17 10:28:57.675400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.622 [2024-04-17 10:28:57.693817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.622 [2024-04-17 10:28:57.693845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.622 [2024-04-17 10:28:57.693857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.622 [2024-04-17 10:28:57.712489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.622 [2024-04-17 10:28:57.712516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.622 [2024-04-17 10:28:57.712534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.622 [2024-04-17 10:28:57.731382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaaff00) 00:32:24.622 [2024-04-17 10:28:57.731409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:24.622 [2024-04-17 10:28:57.731422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:24.622 00:32:24.622 Latency(us) 00:32:24.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.622 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:24.622 nvme0n1 : 2.01 14026.46 
54.79 0.00 0.00 9113.98 4081.11 27167.65 00:32:24.622 =================================================================================================================== 00:32:24.622 Total : 14026.46 54.79 0.00 0.00 9113.98 4081.11 27167.65 00:32:24.622 0 00:32:24.622 10:28:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:24.622 10:28:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:24.622 10:28:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:24.622 10:28:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:24.622 | .driver_specific 00:32:24.622 | .nvme_error 00:32:24.622 | .status_code 00:32:24.622 | .command_transient_transport_error' 00:32:24.881 10:28:58 -- host/digest.sh@71 -- # (( 110 > 0 )) 00:32:24.881 10:28:58 -- host/digest.sh@73 -- # killprocess 3642396 00:32:24.881 10:28:58 -- common/autotest_common.sh@926 -- # '[' -z 3642396 ']' 00:32:24.881 10:28:58 -- common/autotest_common.sh@930 -- # kill -0 3642396 00:32:24.881 10:28:58 -- common/autotest_common.sh@931 -- # uname 00:32:24.881 10:28:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.881 10:28:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3642396 00:32:24.881 10:28:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:24.881 10:28:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:24.881 10:28:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3642396' 00:32:24.881 killing process with pid 3642396 00:32:24.881 10:28:58 -- common/autotest_common.sh@945 -- # kill 3642396 00:32:24.881 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.881 00:32:24.881 Latency(us) 00:32:24.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.881 =================================================================================================================== 00:32:24.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.881 10:28:58 -- common/autotest_common.sh@950 -- # wait 3642396 00:32:25.140 10:28:58 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:32:25.140 10:28:58 -- host/digest.sh@54 -- # local rw bs qd 00:32:25.140 10:28:58 -- host/digest.sh@56 -- # rw=randread 00:32:25.140 10:28:58 -- host/digest.sh@56 -- # bs=131072 00:32:25.140 10:28:58 -- host/digest.sh@56 -- # qd=16 00:32:25.140 10:28:58 -- host/digest.sh@58 -- # bperfpid=3643205 00:32:25.140 10:28:58 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:25.140 10:28:58 -- host/digest.sh@60 -- # waitforlisten 3643205 /var/tmp/bperf.sock 00:32:25.140 10:28:58 -- common/autotest_common.sh@819 -- # '[' -z 3643205 ']' 00:32:25.140 10:28:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.140 10:28:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:25.140 10:28:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
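Before tearing down the finished 4 KiB bdevperf instance above, digest.sh decides pass/fail by reading the NVMe error counters back out of bdevperf: get_transient_errcount issues bdev_get_iostat over the /var/tmp/bperf.sock RPC socket and extracts command_transient_transport_error with jq, and the check (( 110 > 0 )) passes because 110 transient transport errors were recorded. Below is a minimal sketch of that query, assuming a bdevperf instance serving /var/tmp/bperf.sock with a bdev named nvme0n1 and bdev_nvme_set_options --nvme-error-stat in effect; names and paths are taken from the trace, not from the digest.sh source.

#!/usr/bin/env bash
# Minimal sketch of the transient-error check traced above; not the digest.sh source.
# Assumes bdevperf is already serving RPCs on /var/tmp/bperf.sock and exposes a
# bdev named nvme0n1, with bdev_nvme_set_options --nvme-error-stat in effect.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as printed in the log
sock=/var/tmp/bperf.sock

errcount=$("$rootdir/scripts/rpc.py" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The injected CRC32C corruption must show up as transient transport errors;
# the run above recorded 110 of them, so its "(( 110 > 0 ))" check passed.
if (( errcount > 0 )); then
    echo "transient transport errors recorded: $errcount"
else
    echo "no transient transport errors recorded" >&2
    exit 1
fi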
00:32:25.140 10:28:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:25.140 10:28:58 -- common/autotest_common.sh@10 -- # set +x
00:32:25.140 [2024-04-17 10:28:58.320230] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:32:25.140 [2024-04-17 10:28:58.320291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643205 ]
00:32:25.140 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:25.140 Zero copy mechanism will not be used.
00:32:25.140 EAL: No free 2048 kB hugepages reported on node 1
00:32:25.140 [2024-04-17 10:28:58.394257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:25.399 [2024-04-17 10:28:58.476056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:25.965 10:28:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:25.965 10:28:59 -- common/autotest_common.sh@852 -- # return 0
00:32:25.965 10:28:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:25.965 10:28:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:26.224 10:28:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:26.224 10:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:26.224 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:32:26.224 10:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:26.224 10:28:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:26.224 10:28:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:26.792 nvme0n1
00:32:26.792 10:28:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:26.792 10:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:26.792 10:28:59 -- common/autotest_common.sh@10 -- # set +x
00:32:26.792 10:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:26.792 10:28:59 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:26.792 10:28:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:26.792 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:26.792 Zero copy mechanism will not be used.
00:32:26.792 Running I/O for 2 seconds...
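The records above are the complete recipe for the next pass (randread, 128 KiB I/O, queue depth 16): bdevperf is started idle with -z, per-status NVMe error counting is switched on, the namespace is attached over TCP with data digest (--ddgst), CRC32C error injection is armed in the accel framework, and perform_tests starts the 2-second workload whose digest errors follow below. Here is a condensed sketch of that sequence, assuming the NVMe-oF TCP target at 10.0.0.2:4420 exporting nqn.2016-06.io.spdk:cnode1 is already up; commands and paths are the ones printed in the trace, while the real digest.sh wraps them in bperf_rpc/rpc_cmd helpers and cleans up afterwards.

#!/usr/bin/env bash
# Condensed sketch of the setup traced above; not the digest.sh source.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock
rpc() { "$rootdir/scripts/rpc.py" -s "$sock" "$@"; }

# Start bdevperf idle (-z): randread, 128 KiB I/O, queue depth 16, 2 s runtime.
"$rootdir/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

# Wait until the RPC socket exists (digest.sh uses waitforlisten for this).
while [ ! -S "$sock" ]; do sleep 0.2; done

# Keep per-status NVMe error counters so digest failures are observable later.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection stays disabled while the controller connects, then the namespace is
# attached over TCP with data digest (--ddgst) enabled.
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm CRC32C corruption in the accel framework (as traced: -t corrupt -i 32),
# then start the timed workload.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests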
00:32:26.792 [2024-04-17 10:29:00.068874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.068915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.068930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.792 [2024-04-17 10:29:00.078629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.078679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.792 [2024-04-17 10:29:00.087867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.087896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.087909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.792 [2024-04-17 10:29:00.097485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.097520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.097533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.792 [2024-04-17 10:29:00.106001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.106029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.106042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.792 [2024-04-17 10:29:00.115138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:26.792 [2024-04-17 10:29:00.115168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.792 [2024-04-17 10:29:00.115180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.052 [2024-04-17 10:29:00.124383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.052 [2024-04-17 10:29:00.124412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.052 [2024-04-17 10:29:00.124425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.052 [2024-04-17 10:29:00.133895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.052 [2024-04-17 10:29:00.133923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.052 [2024-04-17 10:29:00.133935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.052 [2024-04-17 10:29:00.143044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.143072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.143085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.151557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.151584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.151597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.159535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.159562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.159574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.167229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.167255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.167267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.174888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.174914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.174926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.182651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.182677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.182689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.190203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.190230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.190243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.197672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.197700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.197713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.205409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.205436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.205449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.213476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.213503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.213516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.221578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.221606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.221619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.229409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.229435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.229448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.237261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.237288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:27.053 [2024-04-17 10:29:00.237305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.244796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.244823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.244835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.253616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.253650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.253663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.262146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.262173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.262186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.270333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.270361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.270373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.278230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.278258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.278271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.286022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.286049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.286061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.293506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.293533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.293545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.302401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.302429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.302441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.312344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.312376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.312389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.322344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.322371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.322384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.332405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.332432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.332445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.342748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.342775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.342787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.353426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.053 [2024-04-17 10:29:00.353453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.053 [2024-04-17 10:29:00.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.053 [2024-04-17 10:29:00.363036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.054 [2024-04-17 10:29:00.363064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.054 [2024-04-17 10:29:00.363076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.054 [2024-04-17 10:29:00.372729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.054 [2024-04-17 10:29:00.372758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.054 [2024-04-17 10:29:00.372770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.054 [2024-04-17 10:29:00.382602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.054 [2024-04-17 10:29:00.382634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.054 [2024-04-17 10:29:00.382654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.391667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.391693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.391710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.400531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.400571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.409427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.409454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.409466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.418293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.418322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.418334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.428768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 
00:32:27.313 [2024-04-17 10:29:00.428795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.428808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.438238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.438266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.438278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.447133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.447160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.447173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.455839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.455865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.455877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.464656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.464684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.464696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.472928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.472960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.472973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.481286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.481312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.481323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.489659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.489686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.489698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.499945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.499972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.499985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.510269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.510296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.520090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.520117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.520129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.529600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.529627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.529639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.538651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.538677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.538688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.313 [2024-04-17 10:29:00.547393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.313 [2024-04-17 10:29:00.547419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.313 [2024-04-17 10:29:00.547432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.555952] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.555979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.555991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.564836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.564864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.564876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.572928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.572955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.572968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.580994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.581024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.581036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.589158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.589185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.589197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.597453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.597481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.597493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.606325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.606353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.615076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.615104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.615116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.623407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.623434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.623451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.631738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.631766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.631778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.314 [2024-04-17 10:29:00.640487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.314 [2024-04-17 10:29:00.640515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.314 [2024-04-17 10:29:00.640528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.574 [2024-04-17 10:29:00.649617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.574 [2024-04-17 10:29:00.649655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.574 [2024-04-17 10:29:00.649668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.574 [2024-04-17 10:29:00.658188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.574 [2024-04-17 10:29:00.658216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.574 [2024-04-17 10:29:00.658229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.574 [2024-04-17 10:29:00.667847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.574 [2024-04-17 10:29:00.667874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.574 [2024-04-17 10:29:00.667886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.574 [2024-04-17 10:29:00.677971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:27.574 [2024-04-17 10:29:00.677999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.574 [2024-04-17 10:29:00.678011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-message pattern repeats without interruption from 10:29:00.687 through 10:29:01.812 on tqpair=(0x887820): nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest (CRC32C) error on the received data, nvme_io_qpair_print_command prints the affected READ command (sqid:1 cid:15 nsid:1, len:32, LBA varies per repetition), and spdk_nvme_print_completion prints its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with p:0 m:0 dnr:0. Only the timestamps, LBAs, and sqhd values differ between repetitions; the elapsed-time prefix advances from 00:32:27.574 to 00:32:28.617 over the span. ...]
00:32:28.617 [2024-04-17 10:29:01.812816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.812843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.812856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.819900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.819928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.819939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.827000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.827027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.827039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.834277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.834304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.834316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.841690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.841717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.841729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.849017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.849044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.849061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.856176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.856203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.856215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.863211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 
00:32:28.617 [2024-04-17 10:29:01.863238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.870216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.870242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.870254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.877384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.877411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.877423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.884755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.884781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.884793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.892131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.892158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.892170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.899555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.899582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.899594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.906980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.907007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.907018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.914251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.914281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.914293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.921619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.921653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.921666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.929185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.929211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.929223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.936819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.936846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.936859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.617 [2024-04-17 10:29:01.944241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.617 [2024-04-17 10:29:01.944267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.617 [2024-04-17 10:29:01.944279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.876 [2024-04-17 10:29:01.951626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.876 [2024-04-17 10:29:01.951658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.876 [2024-04-17 10:29:01.951671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.876 [2024-04-17 10:29:01.958951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.876 [2024-04-17 10:29:01.958978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.876 [2024-04-17 10:29:01.958990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.876 [2024-04-17 10:29:01.966357] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.876 [2024-04-17 10:29:01.966383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.876 [2024-04-17 10:29:01.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.876 [2024-04-17 10:29:01.973974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.876 [2024-04-17 10:29:01.974000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:01.974012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:01.981495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:01.981521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:01.981534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:01.988914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:01.988940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:01.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:01.996352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:01.996379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:01.996391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.004024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.004051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.011561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.011587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.011598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:28.877 [2024-04-17 10:29:02.018978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.019005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.019017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.026217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.026244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.026255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.033696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.033722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.033734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.041085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.041110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.041126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.048693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.048719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.048731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.055914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.055940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.055952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.877 [2024-04-17 10:29:02.062835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x887820) 00:32:28.877 [2024-04-17 10:29:02.062862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.877 [2024-04-17 10:29:02.062874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:28.877
00:32:28.877 Latency(us)
00:32:28.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.877 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:28.877 nvme0n1 : 2.00 3839.07 479.88 0.00 0.00 4163.25 2323.55 11200.70
00:32:28.877 ===================================================================================================================
00:32:28.877 Total : 3839.07 479.88 0.00 0.00 4163.25 2323.55 11200.70
00:32:28.877 0
00:32:28.877 10:29:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:28.877 10:29:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:28.877 10:29:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:28.877 10:29:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:28.877 | .driver_specific
00:32:28.877 | .nvme_error
00:32:28.877 | .status_code
00:32:28.877 | .command_transient_transport_error'
00:32:29.136 10:29:02 -- host/digest.sh@71 -- # (( 248 > 0 ))
00:32:29.136 10:29:02 -- host/digest.sh@73 -- # killprocess 3643205
00:32:29.136 10:29:02 -- common/autotest_common.sh@926 -- # '[' -z 3643205 ']'
00:32:29.136 10:29:02 -- common/autotest_common.sh@930 -- # kill -0 3643205
00:32:29.136 10:29:02 -- common/autotest_common.sh@931 -- # uname
00:32:29.136 10:29:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:29.136 10:29:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3643205
00:32:29.136 10:29:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:29.136 10:29:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:29.136 10:29:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3643205'
00:32:29.136 killing process with pid 3643205
00:32:29.136 10:29:02 -- common/autotest_common.sh@945 -- # kill 3643205
00:32:29.136 Received shutdown signal, test time was about 2.000000 seconds
00:32:29.136
00:32:29.136 Latency(us)
00:32:29.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:29.136 ===================================================================================================================
00:32:29.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:29.136 10:29:02 -- common/autotest_common.sh@950 -- # wait 3643205
00:32:29.136 10:29:02 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:32:29.395 10:29:02 -- host/digest.sh@54 -- # local rw bs qd
00:32:29.395 10:29:02 -- host/digest.sh@56 -- # rw=randwrite
00:32:29.395 10:29:02 -- host/digest.sh@56 -- # bs=4096
00:32:29.395 10:29:02 -- host/digest.sh@56 -- # qd=128
00:32:29.395 10:29:02 -- host/digest.sh@58 -- # bperfpid=3644009
00:32:29.395 10:29:02 -- host/digest.sh@60 -- # waitforlisten 3644009 /var/tmp/bperf.sock
00:32:29.395 10:29:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:29.395 10:29:02 -- common/autotest_common.sh@819 -- # '[' -z 3644009 ']'
00:32:29.395 10:29:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:29.395 10:29:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:29.395 10:29:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
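The get_transient_errcount step traced above reduces to one RPC call plus a jq filter over its JSON output. A minimal standalone sketch of that check follows, assuming the same workspace path and bperf RPC socket shown in the trace; SPDK_DIR, BPERF_SOCK and errcount are illustrative names, not variables taken from the test script.

# Sketch only: mirrors the rpc.py + jq invocation traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Ask bdevperf for per-bdev I/O stats; the nvme_error counters are populated
# because the test enables bdev_nvme_set_options --nvme-error-stat beforehand.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The digest test only passes if the injected crc32c corruption actually shows
# up as transient transport errors; this randread pass counted 248 of them.
(( errcount > 0 ))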
00:32:29.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:29.395 10:29:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:29.395 10:29:02 -- common/autotest_common.sh@10 -- # set +x
00:32:29.395 [2024-04-17 10:29:02.641522] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:32:29.395 [2024-04-17 10:29:02.641583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644009 ]
00:32:29.395 EAL: No free 2048 kB hugepages reported on node 1
00:32:29.395 [2024-04-17 10:29:02.714701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:29.654 [2024-04-17 10:29:02.802925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:30.589 10:29:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:30.589 10:29:03 -- common/autotest_common.sh@852 -- # return 0
00:32:30.589 10:29:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:30.589 10:29:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:30.589 10:29:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:30.589 10:29:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:30.589 10:29:03 -- common/autotest_common.sh@10 -- # set +x
00:32:30.589 10:29:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.589 10:29:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:30.589 10:29:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:30.847 nvme0n1
00:32:30.847 10:29:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:30.847 10:29:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:30.847 10:29:04 -- common/autotest_common.sh@10 -- # set +x
00:32:30.847 10:29:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:30.847 10:29:04 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:30.848 10:29:04 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:31.108 Running I/O for 2 seconds...
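The trace above sets up the next error-injection pass (run_bperf_err randwrite 4096 128). A condensed sketch of the same sequence follows, assuming the nvmf/tcp target already serves nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and that the untraced rpc_cmd calls go to the target's default RPC socket (an assumption; xtrace is disabled around those calls, so the log does not show the socket). SPDK_DIR, BPERF_SOCK and bperfpid are illustrative names.

# Sketch of the randwrite error-injection pass; commands mirror the trace,
# step comments are editorial.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf on core 1 (-m 2) in wait-for-tests mode (-z): randwrite, 4 KiB I/O, QD 128, 2 s runtime.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# (the script waits here for the RPC socket to come up: waitforlisten in the trace)

# 2. Enable per-bdev NVMe error counters and unlimited retries on the initiator side.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Make sure crc32c error injection starts out disabled (assumed target socket).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# 4. Attach the controller over TCP with the data digest enabled (--ddgst); the
#    injected crc32c errors will break exactly that digest calculation.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Corrupt the next 256 crc32c operations, then kick off the timed run.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests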
00:32:31.108 [2024-04-17 10:29:04.257431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190edd58 00:32:31.108 [2024-04-17 10:29:04.258882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.258920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.271122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e99d8 00:32:31.108 [2024-04-17 10:29:04.272033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.272066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.284532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f2948 00:32:31.108 [2024-04-17 10:29:04.285153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.285180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.297998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e95a0 00:32:31.108 [2024-04-17 10:29:04.298570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.298595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.311488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ed0b0 00:32:31.108 [2024-04-17 10:29:04.312035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.312060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.324979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f4b08 00:32:31.108 [2024-04-17 10:29:04.325479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.108 [2024-04-17 10:29:04.325505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:31.108 [2024-04-17 10:29:04.338411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e6b70 00:32:31.109 [2024-04-17 10:29:04.338884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.338910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0056 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.352288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e7818 00:32:31.109 [2024-04-17 10:29:04.352726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.352751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.365763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e6b70 00:32:31.109 [2024-04-17 10:29:04.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.366175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.379188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f4b08 00:32:31.109 [2024-04-17 10:29:04.379549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.379574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.392612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ed0b0 00:32:31.109 [2024-04-17 10:29:04.393083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.393108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.406055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f3a28 00:32:31.109 [2024-04-17 10:29:04.406472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.406498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.419499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f2510 00:32:31.109 [2024-04-17 10:29:04.419877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.419903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:31.109 [2024-04-17 10:29:04.432962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0a68 00:32:31.109 [2024-04-17 10:29:04.433353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.109 [2024-04-17 10:29:04.433378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.449075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f4298 00:32:31.373 [2024-04-17 10:29:04.450096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.450123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.460747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eaef0 00:32:31.373 [2024-04-17 10:29:04.462276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.462301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.473160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f35f0 00:32:31.373 [2024-04-17 10:29:04.474488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.474512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.486625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f35f0 00:32:31.373 [2024-04-17 10:29:04.487953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.487978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.500443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ea248 00:32:31.373 [2024-04-17 10:29:04.501667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.501692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.513911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190edd58 00:32:31.373 [2024-04-17 10:29:04.515144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.515168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.527372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ef270 00:32:31.373 [2024-04-17 10:29:04.528623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.528653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.540872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0630 00:32:31.373 [2024-04-17 10:29:04.542163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.542188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.554362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e23b8 00:32:31.373 [2024-04-17 10:29:04.555640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.555671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.567901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e99d8 00:32:31.373 [2024-04-17 10:29:04.569196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.569220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.581532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eee38 00:32:31.373 [2024-04-17 10:29:04.582857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.582881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.595343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e8d30 00:32:31.373 [2024-04-17 10:29:04.596210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.608788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e88f8 00:32:31.373 [2024-04-17 10:29:04.609274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.609300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.622195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190df988 00:32:31.373 [2024-04-17 10:29:04.622636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.622671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.635618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ea248 00:32:31.373 [2024-04-17 10:29:04.636033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.636058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.649049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ea680 00:32:31.373 [2024-04-17 10:29:04.649410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.649435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.662469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eff18 00:32:31.373 [2024-04-17 10:29:04.662807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.662832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.675917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0ea0 00:32:31.373 [2024-04-17 10:29:04.676297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.676321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:31.373 [2024-04-17 10:29:04.689518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ef270 00:32:31.373 [2024-04-17 10:29:04.689817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.373 [2024-04-17 10:29:04.689841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:31.633 [2024-04-17 10:29:04.705299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eff18 00:32:31.633 [2024-04-17 10:29:04.707292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.633 [2024-04-17 10:29:04.707317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.633 [2024-04-17 10:29:04.718778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e12d8 00:32:31.633 [2024-04-17 10:29:04.720785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.633 [2024-04-17 
10:29:04.720810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:31.633 [2024-04-17 10:29:04.732287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e4de8 00:32:31.633 [2024-04-17 10:29:04.734319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.633 [2024-04-17 10:29:04.734344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.744369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:31.634 [2024-04-17 10:29:04.745663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.745688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.757711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:31.634 [2024-04-17 10:29:04.759190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.759215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.771147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:31.634 [2024-04-17 10:29:04.772748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.772772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.784553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f20d8 00:32:31.634 [2024-04-17 10:29:04.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.786281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.798079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e23b8 00:32:31.634 [2024-04-17 10:29:04.799819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.799844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.810586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0ff8 00:32:31.634 [2024-04-17 10:29:04.811889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:31.634 [2024-04-17 10:29:04.811919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.824902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fac10 00:32:31.634 [2024-04-17 10:29:04.827196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.827220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.837543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f7970 00:32:31.634 [2024-04-17 10:29:04.838226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.838251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.852096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f7100 00:32:31.634 [2024-04-17 10:29:04.853410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.853436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.865403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e84c0 00:32:31.634 [2024-04-17 10:29:04.865925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.865949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.878652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ebb98 00:32:31.634 [2024-04-17 10:29:04.880128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.880153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.892972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e4578 00:32:31.634 [2024-04-17 10:29:04.894528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.894552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.906714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190feb58 00:32:31.634 [2024-04-17 10:29:04.907800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9184 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.907824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.920122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f1ca0 00:32:31.634 [2024-04-17 10:29:04.920822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.920846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.933726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f2510 00:32:31.634 [2024-04-17 10:29:04.934385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.934411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.947156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fb8b8 00:32:31.634 [2024-04-17 10:29:04.947775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.947801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:31.634 [2024-04-17 10:29:04.960563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ec840 00:32:31.634 [2024-04-17 10:29:04.961175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.634 [2024-04-17 10:29:04.961199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:04.973957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e27f0 00:32:31.894 [2024-04-17 10:29:04.974513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:04.974542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:04.987379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190efae0 00:32:31.894 [2024-04-17 10:29:04.987923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:04.987948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.000971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e4de8 00:32:31.894 [2024-04-17 10:29:05.001491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.001516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.014391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fb8b8 00:32:31.894 [2024-04-17 10:29:05.014815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.014840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.027847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f3e60 00:32:31.894 [2024-04-17 10:29:05.028329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.028354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.041264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e6b70 00:32:31.894 [2024-04-17 10:29:05.041785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.041810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.054741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e6fa8 00:32:31.894 [2024-04-17 10:29:05.055195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.055220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.068138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f3e60 00:32:31.894 [2024-04-17 10:29:05.068653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.081368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e38d0 00:32:31.894 [2024-04-17 10:29:05.082903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.082927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.094830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fef90 00:32:31.894 [2024-04-17 10:29:05.096068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:2972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.096092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.108315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fef90 00:32:31.894 [2024-04-17 10:29:05.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.109601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.122812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e27f0 00:32:31.894 [2024-04-17 10:29:05.123366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.123389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.136476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190feb58 00:32:31.894 [2024-04-17 10:29:05.137260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.137286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.149927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190dece0 00:32:31.894 [2024-04-17 10:29:05.150675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.150701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.163346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ed0b0 00:32:31.894 [2024-04-17 10:29:05.164061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.164085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.176783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fcdd0 00:32:31.894 [2024-04-17 10:29:05.177453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.177478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.190206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0ea0 00:32:31.894 [2024-04-17 10:29:05.190834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.190859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.203677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eb760 00:32:31.894 [2024-04-17 10:29:05.204263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.204288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:31.894 [2024-04-17 10:29:05.217292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eee38 00:32:31.894 [2024-04-17 10:29:05.218137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:31.894 [2024-04-17 10:29:05.218162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.230528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e6b70 00:32:32.154 [2024-04-17 10:29:05.232034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.232059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.244039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0bc0 00:32:32.154 [2024-04-17 10:29:05.245550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.245575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.257528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5658 00:32:32.154 [2024-04-17 10:29:05.259134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.259159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.271412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0350 00:32:32.154 [2024-04-17 10:29:05.272961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.272986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.284931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f2510 00:32:32.154 [2024-04-17 
10:29:05.286452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.286477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.298400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fd208 00:32:32.154 [2024-04-17 10:29:05.299970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.299995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.311902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e38d0 00:32:32.154 [2024-04-17 10:29:05.313390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.313415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.325402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0bc0 00:32:32.154 [2024-04-17 10:29:05.327009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.327038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.338929] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ec840 00:32:32.154 [2024-04-17 10:29:05.340538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.340562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.352708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ec408 00:32:32.154 [2024-04-17 10:29:05.354309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.366584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eb760 00:32:32.154 [2024-04-17 10:29:05.367673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.367698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.380084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fc128 
00:32:32.154 [2024-04-17 10:29:05.381166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.381191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.393527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:32.154 [2024-04-17 10:29:05.394298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.394323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.406961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f46d0 00:32:32.154 [2024-04-17 10:29:05.407701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.420555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:32.154 [2024-04-17 10:29:05.421255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.421279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.433826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e27f0 00:32:32.154 [2024-04-17 10:29:05.434479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.447250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fef90 00:32:32.154 [2024-04-17 10:29:05.447869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.447894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.460903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0bc0 00:32:32.154 [2024-04-17 10:29:05.461478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.461503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:32.154 [2024-04-17 10:29:05.474340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with 
pdu=0x2000190e6b70 00:32:32.154 [2024-04-17 10:29:05.474886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.154 [2024-04-17 10:29:05.474911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.487790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fd208 00:32:32.414 [2024-04-17 10:29:05.488300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.488324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.501263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:32.414 [2024-04-17 10:29:05.501796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.501821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.514732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f4b08 00:32:32.414 [2024-04-17 10:29:05.515240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.515265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.528202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ed4e8 00:32:32.414 [2024-04-17 10:29:05.528767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.528792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.541652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e95a0 00:32:32.414 [2024-04-17 10:29:05.542280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.542305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.554933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fdeb0 00:32:32.414 [2024-04-17 10:29:05.556548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.556574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.569361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1de60d0) with pdu=0x2000190ebfd0 00:32:32.414 [2024-04-17 10:29:05.570004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.570029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.582980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e38d0 00:32:32.414 [2024-04-17 10:29:05.583843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.583868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.596525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e3d08 00:32:32.414 [2024-04-17 10:29:05.597359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.597384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.609954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fb480 00:32:32.414 [2024-04-17 10:29:05.610745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.610770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.623369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ff3c8 00:32:32.414 [2024-04-17 10:29:05.624126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.624151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.636797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fcdd0 00:32:32.414 [2024-04-17 10:29:05.637509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.637533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.651089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0bc0 00:32:32.414 [2024-04-17 10:29:05.652057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.652082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.664389] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fc128 00:32:32.414 [2024-04-17 10:29:05.665306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.665333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.677793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190de470 00:32:32.414 [2024-04-17 10:29:05.678678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.678707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.691234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eee38 00:32:32.414 [2024-04-17 10:29:05.692068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.692092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.704816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190de8a8 00:32:32.414 [2024-04-17 10:29:05.705616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.705642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.718260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e01f8 00:32:32.414 [2024-04-17 10:29:05.719024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.719049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.731701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eff18 00:32:32.414 [2024-04-17 10:29:05.732418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.414 [2024-04-17 10:29:05.732442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:32.414 [2024-04-17 10:29:05.745137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0350 00:32:32.673 [2024-04-17 10:29:05.745953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.745978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.758602] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190df550 00:32:32.674 [2024-04-17 10:29:05.759435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.759460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.772040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fda78 00:32:32.674 [2024-04-17 10:29:05.772838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.772864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.785456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eee38 00:32:32.674 [2024-04-17 10:29:05.786263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.786288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.798739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190eb328 00:32:32.674 [2024-04-17 10:29:05.800540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.800568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.812204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f46d0 00:32:32.674 [2024-04-17 10:29:05.813794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.813818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.825708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fd208 00:32:32.674 [2024-04-17 10:29:05.827287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.827312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.839186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ff3c8 00:32:32.674 [2024-04-17 10:29:05.840795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.840820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:32.674 
[2024-04-17 10:29:05.852673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fb048 00:32:32.674 [2024-04-17 10:29:05.854216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.854240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.866176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fc560 00:32:32.674 [2024-04-17 10:29:05.867811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.867835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.879708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190de470 00:32:32.674 [2024-04-17 10:29:05.881365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.881389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.893192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e2c28 00:32:32.674 [2024-04-17 10:29:05.894867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.894892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.906672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f96f8 00:32:32.674 [2024-04-17 10:29:05.908372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.908397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.920144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190df118 00:32:32.674 [2024-04-17 10:29:05.921415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.921438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.933239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190feb58 00:32:32.674 [2024-04-17 10:29:05.934429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.934454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.947264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fc128 00:32:32.674 [2024-04-17 10:29:05.948619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.948650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.960746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fc128 00:32:32.674 [2024-04-17 10:29:05.962169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.962194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.974199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5658 00:32:32.674 [2024-04-17 10:29:05.975714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.975740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.986377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f0788 00:32:32.674 [2024-04-17 10:29:05.987617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:05.987641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:32.674 [2024-04-17 10:29:05.999830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5a90 00:32:32.674 [2024-04-17 10:29:06.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.674 [2024-04-17 10:29:06.001146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.013278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5a90 00:32:32.934 [2024-04-17 10:29:06.014579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.014603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.026745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5a90 00:32:32.934 [2024-04-17 10:29:06.028056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.028080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.040190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5a90 00:32:32.934 [2024-04-17 10:29:06.041515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.041539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.053635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e5a90 00:32:32.934 [2024-04-17 10:29:06.054973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.054998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.066761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f2510 00:32:32.934 [2024-04-17 10:29:06.067700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.067724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.080474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f46d0 00:32:32.934 [2024-04-17 10:29:06.081667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.081691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.093956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190ef6a8 00:32:32.934 [2024-04-17 10:29:06.095164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.095189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.107401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fe720 00:32:32.934 [2024-04-17 10:29:06.108601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.108626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.121114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e9e10 00:32:32.934 [2024-04-17 10:29:06.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.121398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.134729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f92c0 00:32:32.934 [2024-04-17 10:29:06.135116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.135141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.148156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f8e88 00:32:32.934 [2024-04-17 10:29:06.148504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.148533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.161594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190fef90 00:32:32.934 [2024-04-17 10:29:06.161905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.161930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.177355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f46d0 00:32:32.934 [2024-04-17 10:29:06.179261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.179285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.190810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e1f80 00:32:32.934 [2024-04-17 10:29:06.192761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.192785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.204292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0630 00:32:32.934 [2024-04-17 10:29:06.206224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.206249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:32.934 [2024-04-17 10:29:06.217762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190f96f8 00:32:32.934 [2024-04-17 10:29:06.219702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:32.934 [2024-04-17 10:29:06.219727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:32:32.934 [2024-04-17 10:29:06.231479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e0ea0 [2024-04-17 10:29:06.233297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-04-17 10:29:06.233321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:32:32.934 [2024-04-17 10:29:06.244943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de60d0) with pdu=0x2000190e1f80 [2024-04-17 10:29:06.246313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-04-17 10:29:06.246337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:32:32.934
00:32:32.934 Latency(us)
00:32:32.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:32.934 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:32.934 nvme0n1 : 2.01 18840.65 73.60 0.00 0.00 6785.32 3410.85 17158.52
00:32:32.934 ===================================================================================================================
00:32:32.934 Total : 18840.65 73.60 0.00 0.00 6785.32 3410.85 17158.52
00:32:32.934 0
00:32:33.193 10:29:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:33.193 10:29:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:33.193 10:29:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:33.193 | .driver_specific
00:32:33.193 | .nvme_error
00:32:33.193 | .status_code
00:32:33.193 | .command_transient_transport_error'
00:32:33.193 10:29:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:33.193 10:29:06 -- host/digest.sh@71 -- # (( 148 > 0 ))
00:32:33.193 10:29:06 -- host/digest.sh@73 -- # killprocess 3644009
00:32:33.193 10:29:06 -- common/autotest_common.sh@926 -- # '[' -z 3644009 ']'
00:32:33.193 10:29:06 -- common/autotest_common.sh@930 -- # kill -0 3644009
00:32:33.193 10:29:06 -- common/autotest_common.sh@931 -- # uname
00:32:33.193 10:29:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:33.193 10:29:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3644009
00:32:33.452 10:29:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:33.452 10:29:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:33.452 10:29:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3644009'
00:32:33.452 killing process with pid 3644009
00:32:33.452 10:29:06 -- common/autotest_common.sh@945 -- # kill 3644009
00:32:33.452 Received shutdown signal, test time was about 2.000000 seconds
00:32:33.452
00:32:33.452 Latency(us)
00:32:33.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.452 ===================================================================================================================
00:32:33.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:33.452 10:29:06 -- common/autotest_common.sh@950 -- # wait 3644009
00:32:33.452 10:29:06 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:32:33.452 10:29:06 -- host/digest.sh@54 -- # local rw bs qd
00:32:33.452 10:29:06 -- host/digest.sh@56 -- # rw=randwrite
00:32:33.452 10:29:06 -- host/digest.sh@56 -- # bs=131072
00:32:33.452 10:29:06 -- host/digest.sh@56 -- # qd=16
00:32:33.711 10:29:06 -- host/digest.sh@58 -- # bperfpid=3644644
00:32:33.711 10:29:06 -- host/digest.sh@60 -- # waitforlisten 3644644 /var/tmp/bperf.sock
00:32:33.711 10:29:06 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:33.711 10:29:06 -- common/autotest_common.sh@819 -- # '[' -z 3644644 ']'
00:32:33.711 10:29:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:33.711 10:29:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:33.711 10:29:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:33.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:33.711 10:29:06 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:33.711 10:29:06 -- common/autotest_common.sh@10 -- # set +x
00:32:33.711 [2024-04-17 10:29:06.827808] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:32:33.711 [2024-04-17 10:29:06.827867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644644 ]
00:32:33.711 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:33.711 Zero copy mechanism will not be used.
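For reference, the bdevperf launch that run_bperf_err traces above boils down to the commands below. This is a minimal sketch, not the test helper itself: the SPDK_DIR path and the bperf.sock socket are copied from this log and are assumptions about the local layout, and waitforlisten is approximated with a simple poll for the RPC socket.

  # Minimal sketch: launch bdevperf the way run_bperf_err does for the
  # randwrite / 128 KiB / qd=16 case (paths and socket assumed from this log).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # -m 2: core mask (core 1); -r: private RPC socket for this instance;
  # -w/-o/-q/-t: workload, IO size, queue depth, run time in seconds;
  # -z: wait for RPC configuration and an explicit perform_tests call
  # before issuing any I/O.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
          -w randwrite -o 131072 -q 16 -t 2 -z &
  bperfpid=$!
  # Rough stand-in for waitforlisten: block until the RPC socket appears.
  until [ -S "$BPERF_SOCK" ]; do sleep 0.1; done

Between test cases only the rw/bs/qd arguments change; the run summarized above used 4 KiB random writes at queue depth 128, while this one switches to 128 KiB writes at queue depth 16.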
00:32:33.711 EAL: No free 2048 kB hugepages reported on node 1
00:32:33.711 [2024-04-17 10:29:06.900988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:33.711 [2024-04-17 10:29:06.984095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:34.646 10:29:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:34.647 10:29:07 -- common/autotest_common.sh@852 -- # return 0
00:32:34.647 10:29:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:34.647 10:29:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:34.905 10:29:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:34.905 10:29:07 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:34.905 10:29:07 -- common/autotest_common.sh@10 -- # set +x
00:32:34.905 10:29:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:34.905 10:29:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:34.905 10:29:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:35.163 nvme0n1
00:32:35.163 10:29:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:35.163 10:29:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:35.163 10:29:08 -- common/autotest_common.sh@10 -- # set +x
00:32:35.163 10:29:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:35.163 10:29:08 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:35.163 10:29:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:35.163 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:35.163 Zero copy mechanism will not be used.
00:32:35.163 Running I/O for 2 seconds...
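The RPC sequence traced above can also be replayed by hand. The sketch below is an approximation under stated assumptions: it reuses the sockets and target address shown in this run, and assumes that rpc_cmd talks to the NVMe-oF target over rpc.py's default /var/tmp/spdk.sock while bperf_rpc targets the bdevperf socket. It ends by reading back the transient-error counter the same way get_transient_errcount does.

  # Minimal sketch of the digest-error setup traced above (socket paths and
  # the 10.0.0.2 target address are assumptions copied from this log).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  RPC="$SPDK_DIR/scripts/rpc.py"
  # Initiator (bdevperf): keep per-command NVMe error statistics and retry
  # failed commands indefinitely so injected digest errors do not fail the job.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target (default RPC socket): make sure crc32c error injection starts disabled.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
          -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Turn on crc32c corruption injection on the target, exactly as traced above,
  # then run the workload; each corrupted digest shows up as a
  # data_crc32_calc_done error and a COMMAND TRANSIENT TRANSPORT ERROR
  # completion like the ones logged below.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  # Read the transient-error count back from the initiator's iostat, mirroring
  # get_transient_errcount in host/digest.sh.
  "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Because the bdev layer retries each failed write (--bdev-retry-count -1), the injected digest errors surface as retried completions and a growing error counter (148 after the previous 4 KiB run) rather than as failed I/O in the job summary.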
00:32:35.423 [2024-04-17 10:29:08.496651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.496890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.496923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.504088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.504227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.504254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.510458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.510550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.510575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.515979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.516151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.516178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.521141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.521268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.521292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.526305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.526396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.526420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.532222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.532338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.532363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.537940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.538255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.538282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.543009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.543293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.543318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.548104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.548243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.548267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.553081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.553203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.553226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.423 [2024-04-17 10:29:08.558655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.423 [2024-04-17 10:29:08.558741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.423 [2024-04-17 10:29:08.558766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.564142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.564285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.564309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.569173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.569439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.569465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.574666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.574864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.574899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.581206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.581582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.581607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.587393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.587634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.587667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.595572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.595745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.595769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.602555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.602651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.602676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.609306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.609443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.609465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.616369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.616501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.616524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.622140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.622281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.622305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.627256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.627474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.632397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.632724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.632749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.637363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.637678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.637703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.642515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.642769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.642795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.647821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.647899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.647921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.654198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.654281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 
[2024-04-17 10:29:08.654305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.660209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.660318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.660341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.666276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.666473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.666497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.673138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.673320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.679496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.679793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.679824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.685799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.686005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.686030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.691765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.692023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.692047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.698755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.698918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.698941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.705174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.705310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.705334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.711700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.711803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.711827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.717389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.717524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.424 [2024-04-17 10:29:08.717548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.424 [2024-04-17 10:29:08.723429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.424 [2024-04-17 10:29:08.723561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.723585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.425 [2024-04-17 10:29:08.730041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.425 [2024-04-17 10:29:08.730323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.730348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.425 [2024-04-17 10:29:08.735921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.425 [2024-04-17 10:29:08.736216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.736243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.425 [2024-04-17 10:29:08.741250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.425 [2024-04-17 10:29:08.741512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.741538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.425 [2024-04-17 10:29:08.746270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.425 [2024-04-17 10:29:08.746381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.746405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.425 [2024-04-17 10:29:08.751265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.425 [2024-04-17 10:29:08.751393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.425 [2024-04-17 10:29:08.751417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.756275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.756384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.756407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.761262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.761450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.761474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.766336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.766489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.766512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.771476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.771769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.771794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.776445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.776766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.776792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.781660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.781839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.781863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.788880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.789071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.789094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.796515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.796601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.796625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.801993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.802113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.802138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.807056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.807214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.807239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.812085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.812241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.812265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.817953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 
[2024-04-17 10:29:08.818262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.684 [2024-04-17 10:29:08.818288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.684 [2024-04-17 10:29:08.824435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.684 [2024-04-17 10:29:08.824726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.824751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.829925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.830087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.830119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.834956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.835092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.835116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.839897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.840029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.840052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.844850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.844958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.844982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.849844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.849998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.850020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.855264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.855423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.855446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.862102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.862384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.862409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.867231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.867483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.867508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.873286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.873552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.873577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.881234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.881424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.881448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.889664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.889870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.889894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.898234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.898392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.898416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.906663] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.906881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.906907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.915179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.915434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.915459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.923874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.924210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.924235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.930495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.930747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.930774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.935699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.935918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.935944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.940892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.941089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.941113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.946002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.946164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:35.685 [2024-04-17 10:29:08.951345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.951451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.951475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.958039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.958174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.958198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.963813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.964018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.964041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.971178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.971493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.971518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.977673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.977838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.977862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.986443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.986709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.986736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:08.993630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:08.993785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.685 [2024-04-17 10:29:08.993810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.685 [2024-04-17 10:29:09.001429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.685 [2024-04-17 10:29:09.001536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.686 [2024-04-17 10:29:09.001565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.686 [2024-04-17 10:29:09.006828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.686 [2024-04-17 10:29:09.006965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.686 [2024-04-17 10:29:09.006989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.686 [2024-04-17 10:29:09.011854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.686 [2024-04-17 10:29:09.012004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.686 [2024-04-17 10:29:09.012028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.016916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.017120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.017144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.022056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.022350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.022376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.027025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.027241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.027266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.032188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.032423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.032448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.037329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.037584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.037610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.043488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.043674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.043697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.049802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.049955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.049979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.056088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.056224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.056248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.061866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.062065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.062089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.067045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.067362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.067387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.074096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.074283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.074307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.081355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.081522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.946 [2024-04-17 10:29:09.081546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.946 [2024-04-17 10:29:09.087944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.946 [2024-04-17 10:29:09.088129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.088152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.094140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.094240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.094264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.099639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.099745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.099772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.107288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.107511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.107537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.113241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.113448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.118444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.118751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 
[2024-04-17 10:29:09.118778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.123539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.123789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.123814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.128801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.128920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.128943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.134248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.134436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.134461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.139303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.139429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.144442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.144550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.144572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.149684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.149846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.149871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.154780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.154979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.155002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.160000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.160310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.160335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.167434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.167743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.167769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.174408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.174531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.174554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.179531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.179708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.179731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.184630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.184742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.184765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.189867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.189991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.190015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.194946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.195105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.195129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.200484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.200740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.200766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.207153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.207435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.207461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.213277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.213523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.213547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.221229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.221358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.227088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.227223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.227246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.233966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.234061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.234085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.240255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.240361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.947 [2024-04-17 10:29:09.240385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.947 [2024-04-17 10:29:09.245746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.947 [2024-04-17 10:29:09.245874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.245897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.948 [2024-04-17 10:29:09.251296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.948 [2024-04-17 10:29:09.251491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.251519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.948 [2024-04-17 10:29:09.256972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.948 [2024-04-17 10:29:09.257250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.257275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.948 [2024-04-17 10:29:09.261983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.948 [2024-04-17 10:29:09.262195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.262220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.948 [2024-04-17 10:29:09.267300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.948 [2024-04-17 10:29:09.267427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.267451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:35.948 [2024-04-17 10:29:09.272755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:35.948 [2024-04-17 10:29:09.273039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:35.948 [2024-04-17 10:29:09.273064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.280621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 
[2024-04-17 10:29:09.280759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.280782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.287777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.287887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.287910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.294775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.294930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.294953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.302107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.302309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.302333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.308227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.308534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.308561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.314226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.314405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.314428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.320001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.320139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.320163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.325615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.325807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.325830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.331068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.331251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.331275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.336522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.336684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.336708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.341925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.342097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.342121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.348580] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.348717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.348742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.355172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.355495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.355521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.362730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.363026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.369041] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.369182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.369205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.207 [2024-04-17 10:29:09.374200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.207 [2024-04-17 10:29:09.374381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.207 [2024-04-17 10:29:09.374404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.379188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.379307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.379331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.384181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.384281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.384305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.389142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.389319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.394183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.394395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.394420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.399235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.399526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.399551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:36.208 [2024-04-17 10:29:09.404233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.404447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.404477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.409146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.409270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.409294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.414146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.414336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.414359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.419066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.419195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.419219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.423987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.424097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.424120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.430001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.430179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.430203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.435408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.435618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.435642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.440471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.440756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.440782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.445516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.445740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.445765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.450472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.450585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.450609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.455541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.455679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.455702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.460538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.460657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.460681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.465522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.465633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.465665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.470526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.470691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.470715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.475577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.475784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.475809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.480609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.480914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.480940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.485620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.485856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.485880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.490627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.490751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.490778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.495595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.495749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.495772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.500536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.500668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.500692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.505465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.505619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.505650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.510466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.510602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.510628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.515570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.515778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.515802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.520673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.520962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.520989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.525740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.525954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.525980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.530669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.530768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.530791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.208 [2024-04-17 10:29:09.535669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.208 [2024-04-17 10:29:09.535830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.208 [2024-04-17 10:29:09.535853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.540611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.540753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 
[2024-04-17 10:29:09.540777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.545549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.545667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.545691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.550578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.550717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.550740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.555616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.555827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.555850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.560631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.560918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.560944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.565675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.565881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.565905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.570593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.570711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.575662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.575810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.575833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.580586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.580695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.580718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.585581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.585732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.585756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.591704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.591930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.468 [2024-04-17 10:29:09.591954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.468 [2024-04-17 10:29:09.598732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.468 [2024-04-17 10:29:09.599007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.599031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.606624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.606888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.606914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.613836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.614022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.614046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.621110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.621244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.621267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.627009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.627201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.627225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.632076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.632201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.632229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.637072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.637205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.637228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.642417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.642564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.642588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.647558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.647773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.647798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.652756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.653051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.653077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.657744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.657957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.657980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.662676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.662782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.662805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.667971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.668130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.668154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.672931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.673031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.673054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.677875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.677977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.678002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.682895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.683056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.683080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.688363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.688566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.688590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.693730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 
[2024-04-17 10:29:09.693941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.693966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.700367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.700633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.700665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.706573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.706722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.706746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.711785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.711940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.711964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.716869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.717014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.717037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.722693] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.722838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.722861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.728350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.728535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.733446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.733696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.733722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.738559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.738857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.738883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.743600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.469 [2024-04-17 10:29:09.743827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.469 [2024-04-17 10:29:09.743852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.469 [2024-04-17 10:29:09.748555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.748719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.748743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.753560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.753809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.758574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.758729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.758751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.763578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.763687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.763711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.769137] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.769243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.769271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.775594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.775795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.775820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.782470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.782777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.782802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.789418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.789625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.789657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.470 [2024-04-17 10:29:09.795125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.470 [2024-04-17 10:29:09.795302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.470 [2024-04-17 10:29:09.795326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:36.730 [2024-04-17 10:29:09.800730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.730 [2024-04-17 10:29:09.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.730 [2024-04-17 10:29:09.800873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.730 [2024-04-17 10:29:09.805844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:36.730 [2024-04-17 10:29:09.806017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.730 [2024-04-17 10:29:09.806041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:36.730 [2024-04-17 10:29:09.811574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90
00:32:36.730 [2024-04-17 10:29:09.811767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:36.730 [2024-04-17 10:29:09.811791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats for each affected WRITE between 10:29:09.811 and 10:29:10.476: tcp.c:2034:data_crc32_calc_done reports "Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90", nvme_qpair.c prints the WRITE command, and its completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, lba and sqhd fields differ from entry to entry ...]
00:32:37.253 [2024-04-17 10:29:10.476267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90
00:32:37.253 [2024-04-17 10:29:10.476371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.253 [2024-04-17 10:29:10.476394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:37.253 [2024-04-17 10:29:10.481235] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1de6410) with pdu=0x2000190fef90 00:32:37.253 [2024-04-17 10:29:10.481386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.253 [2024-04-17 10:29:10.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:37.253 00:32:37.253 Latency(us) 00:32:37.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.253 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:37.253 nvme0n1 : 2.00 5380.09 672.51 0.00 0.00 2968.58 2085.24 15609.48 00:32:37.253 =================================================================================================================== 00:32:37.253 Total : 5380.09 672.51 0.00 0.00 2968.58 2085.24 15609.48 00:32:37.253 0 00:32:37.253 10:29:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:37.253 10:29:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:37.253 10:29:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:37.253 | .driver_specific 00:32:37.253 | .nvme_error 00:32:37.253 | .status_code 00:32:37.253 | .command_transient_transport_error' 00:32:37.254 10:29:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:37.512 10:29:10 -- host/digest.sh@71 -- # (( 347 > 0 )) 00:32:37.512 10:29:10 -- host/digest.sh@73 -- # killprocess 3644644 00:32:37.512 10:29:10 -- common/autotest_common.sh@926 -- # '[' -z 3644644 ']' 00:32:37.512 10:29:10 -- common/autotest_common.sh@930 -- # kill -0 3644644 00:32:37.512 10:29:10 -- common/autotest_common.sh@931 -- # uname 00:32:37.512 10:29:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:37.512 10:29:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3644644 00:32:37.512 10:29:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:37.512 10:29:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:37.512 10:29:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3644644' 00:32:37.512 killing process with pid 3644644 00:32:37.512 10:29:10 -- common/autotest_common.sh@945 -- # kill 3644644 00:32:37.512 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.512 00:32:37.512 Latency(us) 00:32:37.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.512 =================================================================================================================== 00:32:37.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.513 10:29:10 -- common/autotest_common.sh@950 -- # wait 3644644 00:32:37.771 10:29:11 -- host/digest.sh@115 -- # killprocess 3642367 00:32:37.771 10:29:11 -- common/autotest_common.sh@926 -- # '[' -z 3642367 ']' 00:32:37.771 10:29:11 -- common/autotest_common.sh@930 -- # kill -0 3642367 00:32:37.771 10:29:11 -- common/autotest_common.sh@931 -- # uname 00:32:37.771 10:29:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
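The get_transient_errcount/bperf_rpc trace above is the heart of the pass/fail decision for this digest-error case: the bdevperf app's RPC socket is queried for the nvme0n1 bdev's I/O statistics, and the NVMe transient transport error counter is extracted with jq (347 in this run), which satisfies the `(( 347 > 0 ))` check. A condensed re-creation of that check is sketched below; the RPC path, socket, bdev name and jq filter are taken from the trace, but the wrapper structure is reconstructed rather than copied from host/digest.sh, so treat it as illustrative.

get_transient_errcount() {
    local bdev=$1
    # Query per-bdev I/O statistics from the bdevperf RPC socket and extract
    # the count of completions that failed with a transient transport error.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # this step passes only if digest errors were actually counted (347 here)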
00:32:37.771 10:29:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3642367 00:32:37.771 10:29:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:37.771 10:29:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:37.771 10:29:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3642367' 00:32:37.771 killing process with pid 3642367 00:32:37.771 10:29:11 -- common/autotest_common.sh@945 -- # kill 3642367 00:32:37.771 10:29:11 -- common/autotest_common.sh@950 -- # wait 3642367 00:32:38.030 00:32:38.030 real 0m17.791s 00:32:38.030 user 0m35.949s 00:32:38.030 sys 0m4.487s 00:32:38.030 10:29:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.030 10:29:11 -- common/autotest_common.sh@10 -- # set +x 00:32:38.030 ************************************ 00:32:38.030 END TEST nvmf_digest_error 00:32:38.030 ************************************ 00:32:38.030 10:29:11 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:32:38.030 10:29:11 -- host/digest.sh@139 -- # nvmftestfini 00:32:38.030 10:29:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:38.030 10:29:11 -- nvmf/common.sh@116 -- # sync 00:32:38.030 10:29:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:38.030 10:29:11 -- nvmf/common.sh@119 -- # set +e 00:32:38.030 10:29:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:38.030 10:29:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:38.030 rmmod nvme_tcp 00:32:38.289 rmmod nvme_fabrics 00:32:38.289 rmmod nvme_keyring 00:32:38.289 10:29:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:38.289 10:29:11 -- nvmf/common.sh@123 -- # set -e 00:32:38.289 10:29:11 -- nvmf/common.sh@124 -- # return 0 00:32:38.289 10:29:11 -- nvmf/common.sh@477 -- # '[' -n 3642367 ']' 00:32:38.289 10:29:11 -- nvmf/common.sh@478 -- # killprocess 3642367 00:32:38.289 10:29:11 -- common/autotest_common.sh@926 -- # '[' -z 3642367 ']' 00:32:38.289 10:29:11 -- common/autotest_common.sh@930 -- # kill -0 3642367 00:32:38.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3642367) - No such process 00:32:38.289 10:29:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3642367 is not found' 00:32:38.289 Process with pid 3642367 is not found 00:32:38.289 10:29:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:38.289 10:29:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:38.289 10:29:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:38.289 10:29:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:38.289 10:29:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:38.289 10:29:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.289 10:29:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.289 10:29:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.194 10:29:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:40.194 00:32:40.194 real 0m41.682s 00:32:40.194 user 1m8.357s 00:32:40.194 sys 0m13.182s 00:32:40.194 10:29:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.194 10:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 ************************************ 00:32:40.194 END TEST nvmf_digest 00:32:40.194 ************************************ 00:32:40.194 10:29:13 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:32:40.194 10:29:13 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:32:40.194 10:29:13 -- nvmf/nvmf.sh@119 -- # [[ 
phy == phy ]] 00:32:40.194 10:29:13 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:40.194 10:29:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:40.194 10:29:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:40.194 10:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 ************************************ 00:32:40.194 START TEST nvmf_bdevperf 00:32:40.194 ************************************ 00:32:40.194 10:29:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:40.453 * Looking for test storage... 00:32:40.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:40.453 10:29:13 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.453 10:29:13 -- nvmf/common.sh@7 -- # uname -s 00:32:40.453 10:29:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.453 10:29:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.453 10:29:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.453 10:29:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.453 10:29:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.453 10:29:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.453 10:29:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.453 10:29:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.453 10:29:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.453 10:29:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.453 10:29:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:40.453 10:29:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:40.453 10:29:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.453 10:29:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.453 10:29:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.453 10:29:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.453 10:29:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.453 10:29:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.453 10:29:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.453 10:29:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.453 10:29:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.453 10:29:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.453 10:29:13 -- paths/export.sh@5 -- # export PATH 00:32:40.453 10:29:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.453 10:29:13 -- nvmf/common.sh@46 -- # : 0 00:32:40.453 10:29:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:40.453 10:29:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:40.453 10:29:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:40.453 10:29:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.453 10:29:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.453 10:29:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:40.453 10:29:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:40.453 10:29:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:40.453 10:29:13 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.453 10:29:13 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.453 10:29:13 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:40.453 10:29:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:40.453 10:29:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.453 10:29:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:40.453 10:29:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:40.453 10:29:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:40.453 10:29:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.453 10:29:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:40.453 10:29:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.453 10:29:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:40.453 10:29:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:40.453 10:29:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:40.453 10:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:45.728 10:29:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:32:45.728 10:29:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:45.728 10:29:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:45.728 10:29:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:45.728 10:29:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:45.728 10:29:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:45.728 10:29:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:45.728 10:29:18 -- nvmf/common.sh@294 -- # net_devs=() 00:32:45.728 10:29:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:45.728 10:29:18 -- nvmf/common.sh@295 -- # e810=() 00:32:45.728 10:29:18 -- nvmf/common.sh@295 -- # local -ga e810 00:32:45.728 10:29:18 -- nvmf/common.sh@296 -- # x722=() 00:32:45.728 10:29:18 -- nvmf/common.sh@296 -- # local -ga x722 00:32:45.728 10:29:18 -- nvmf/common.sh@297 -- # mlx=() 00:32:45.728 10:29:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:45.728 10:29:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.728 10:29:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:45.728 10:29:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:45.728 10:29:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:45.728 10:29:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:45.728 10:29:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:45.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:45.728 10:29:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:45.728 10:29:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:45.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:45.728 10:29:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
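[editor's note] The device scan traced above keys purely on PCI vendor:device IDs (here Intel 0x159b, the E810 "ice" ports whose netdevs are reported just below). A minimal standalone sketch of the same lookup is shown here for reference; the use of lspci is an assumption for illustration and is not part of the test scripts, which use their own pci_bus_cache arrays:

    # Hypothetical standalone check: list Intel E810 (0x8086:0x159b) ports and their
    # kernel netdev names, mirroring what gather_supported_nvmf_pci_devs matches.
    for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
        echo "Found ${pci} (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/${pci}/net/"   # e.g. the cvl_* names reported below
    done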
00:32:45.728 10:29:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:45.728 10:29:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:45.728 10:29:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.728 10:29:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:45.728 10:29:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.728 10:29:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:45.728 Found net devices under 0000:af:00.0: cvl_0_0 00:32:45.728 10:29:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.728 10:29:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:45.728 10:29:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.728 10:29:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:45.728 10:29:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.729 10:29:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:45.729 Found net devices under 0000:af:00.1: cvl_0_1 00:32:45.729 10:29:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.729 10:29:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:45.729 10:29:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:45.729 10:29:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:45.729 10:29:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:45.729 10:29:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:45.729 10:29:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.729 10:29:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.729 10:29:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.729 10:29:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:45.729 10:29:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.729 10:29:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.729 10:29:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:45.729 10:29:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.729 10:29:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.729 10:29:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:45.729 10:29:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:45.729 10:29:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.729 10:29:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.989 10:29:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.989 10:29:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.989 10:29:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:45.989 10:29:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.989 10:29:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.989 10:29:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.989 10:29:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:45.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:45.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:32:45.989 00:32:45.989 --- 10.0.0.2 ping statistics --- 00:32:45.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.989 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:45.989 10:29:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:32:45.989 00:32:45.989 --- 10.0.0.1 ping statistics --- 00:32:45.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.989 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:32:45.989 10:29:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.989 10:29:19 -- nvmf/common.sh@410 -- # return 0 00:32:45.989 10:29:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:45.989 10:29:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.989 10:29:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:45.989 10:29:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:45.989 10:29:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.989 10:29:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:45.989 10:29:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:45.989 10:29:19 -- host/bdevperf.sh@25 -- # tgt_init 00:32:45.989 10:29:19 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:45.989 10:29:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:45.989 10:29:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:45.989 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:32:45.989 10:29:19 -- nvmf/common.sh@469 -- # nvmfpid=3649079 00:32:45.989 10:29:19 -- nvmf/common.sh@470 -- # waitforlisten 3649079 00:32:45.989 10:29:19 -- common/autotest_common.sh@819 -- # '[' -z 3649079 ']' 00:32:45.989 10:29:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.989 10:29:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:45.989 10:29:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.989 10:29:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:45.989 10:29:19 -- common/autotest_common.sh@10 -- # set +x 00:32:45.989 10:29:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:45.989 [2024-04-17 10:29:19.305704] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:45.989 [2024-04-17 10:29:19.305760] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.248 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.248 [2024-04-17 10:29:19.385587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:46.248 [2024-04-17 10:29:19.474553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:46.248 [2024-04-17 10:29:19.474703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
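[editor's note] To summarize the nvmf_tcp_init trace completed above: the target-side port (cvl_0_0) is moved into its own network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and connectivity is verified in both directions by the pings. Condensed, the same steps are:

    # Condensed restatement of the commands traced above (interface names are
    # specific to this runner; cvl_0_0 = target port, cvl_0_1 = initiator port).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> host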
00:32:46.248 [2024-04-17 10:29:19.474716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.248 [2024-04-17 10:29:19.474726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.248 [2024-04-17 10:29:19.474837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.248 [2024-04-17 10:29:19.474954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.248 [2024-04-17 10:29:19.474955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.214 10:29:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:47.214 10:29:20 -- common/autotest_common.sh@852 -- # return 0 00:32:47.214 10:29:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:47.214 10:29:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:47.214 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 10:29:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.214 10:29:20 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.214 10:29:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:47.214 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 [2024-04-17 10:29:20.286149] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.214 10:29:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:47.214 10:29:20 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:47.214 10:29:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:47.214 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 Malloc0 00:32:47.214 10:29:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:47.214 10:29:20 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.214 10:29:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:47.214 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 10:29:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:47.214 10:29:20 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.214 10:29:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:47.214 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.214 10:29:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:47.214 10:29:20 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.214 10:29:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:47.215 10:29:20 -- common/autotest_common.sh@10 -- # set +x 00:32:47.215 [2024-04-17 10:29:20.347938] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.215 10:29:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:47.215 10:29:20 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:47.215 10:29:20 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:47.215 10:29:20 -- nvmf/common.sh@520 -- # config=() 00:32:47.215 10:29:20 -- nvmf/common.sh@520 -- # local subsystem config 00:32:47.215 10:29:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:47.215 10:29:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:47.215 { 
00:32:47.215 "params": { 00:32:47.215 "name": "Nvme$subsystem", 00:32:47.215 "trtype": "$TEST_TRANSPORT", 00:32:47.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:47.215 "adrfam": "ipv4", 00:32:47.215 "trsvcid": "$NVMF_PORT", 00:32:47.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:47.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:47.215 "hdgst": ${hdgst:-false}, 00:32:47.215 "ddgst": ${ddgst:-false} 00:32:47.215 }, 00:32:47.215 "method": "bdev_nvme_attach_controller" 00:32:47.215 } 00:32:47.215 EOF 00:32:47.215 )") 00:32:47.215 10:29:20 -- nvmf/common.sh@542 -- # cat 00:32:47.215 10:29:20 -- nvmf/common.sh@544 -- # jq . 00:32:47.215 10:29:20 -- nvmf/common.sh@545 -- # IFS=, 00:32:47.215 10:29:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:47.215 "params": { 00:32:47.215 "name": "Nvme1", 00:32:47.215 "trtype": "tcp", 00:32:47.215 "traddr": "10.0.0.2", 00:32:47.215 "adrfam": "ipv4", 00:32:47.215 "trsvcid": "4420", 00:32:47.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:47.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:47.215 "hdgst": false, 00:32:47.215 "ddgst": false 00:32:47.215 }, 00:32:47.215 "method": "bdev_nvme_attach_controller" 00:32:47.215 }' 00:32:47.215 [2024-04-17 10:29:20.397839] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:47.215 [2024-04-17 10:29:20.397896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649159 ] 00:32:47.215 EAL: No free 2048 kB hugepages reported on node 1 00:32:47.215 [2024-04-17 10:29:20.478887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.483 [2024-04-17 10:29:20.563530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.483 Running I/O for 1 seconds... 
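[editor's note] Before this 1-second run starts, the trace above has already configured the target over its RPC socket and handed bdevperf a JSON config pointing at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Expressed as direct rpc.py calls (a sketch; rpc_cmd in the harness wraps scripts/rpc.py, here against the target's default /var/tmp/spdk.sock):

    # Target-side setup, as traced above, written as plain rpc.py invocations.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420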
00:32:48.858 00:32:48.858 Latency(us) 00:32:48.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.858 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:48.858 Verification LBA range: start 0x0 length 0x4000 00:32:48.858 Nvme1n1 : 1.01 11480.44 44.85 0.00 0.00 11090.91 1608.61 16920.20 00:32:48.858 =================================================================================================================== 00:32:48.858 Total : 11480.44 44.85 0.00 0.00 11090.91 1608.61 16920.20 00:32:48.858 10:29:21 -- host/bdevperf.sh@30 -- # bdevperfpid=3649478 00:32:48.858 10:29:21 -- host/bdevperf.sh@32 -- # sleep 3 00:32:48.858 10:29:21 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:48.859 10:29:21 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:48.859 10:29:21 -- nvmf/common.sh@520 -- # config=() 00:32:48.859 10:29:21 -- nvmf/common.sh@520 -- # local subsystem config 00:32:48.859 10:29:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:48.859 10:29:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:48.859 { 00:32:48.859 "params": { 00:32:48.859 "name": "Nvme$subsystem", 00:32:48.859 "trtype": "$TEST_TRANSPORT", 00:32:48.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:48.859 "adrfam": "ipv4", 00:32:48.859 "trsvcid": "$NVMF_PORT", 00:32:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:48.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:48.859 "hdgst": ${hdgst:-false}, 00:32:48.859 "ddgst": ${ddgst:-false} 00:32:48.859 }, 00:32:48.859 "method": "bdev_nvme_attach_controller" 00:32:48.859 } 00:32:48.859 EOF 00:32:48.859 )") 00:32:48.859 10:29:21 -- nvmf/common.sh@542 -- # cat 00:32:48.859 10:29:22 -- nvmf/common.sh@544 -- # jq . 00:32:48.859 10:29:22 -- nvmf/common.sh@545 -- # IFS=, 00:32:48.859 10:29:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:48.859 "params": { 00:32:48.859 "name": "Nvme1", 00:32:48.859 "trtype": "tcp", 00:32:48.859 "traddr": "10.0.0.2", 00:32:48.859 "adrfam": "ipv4", 00:32:48.859 "trsvcid": "4420", 00:32:48.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:48.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:48.859 "hdgst": false, 00:32:48.859 "ddgst": false 00:32:48.859 }, 00:32:48.859 "method": "bdev_nvme_attach_controller" 00:32:48.859 }' 00:32:48.859 [2024-04-17 10:29:22.036575] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:48.859 [2024-04-17 10:29:22.036637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649478 ] 00:32:48.859 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.859 [2024-04-17 10:29:22.118222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.118 [2024-04-17 10:29:22.202305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.118 Running I/O for 15 seconds... 
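[editor's note] The 15-second run below is a failure-injection pass: shortly after I/O starts, the harness kills the nvmf target (kill -9 3649079) while bdevperf still has commands in flight, which is what produces the long series of ABORTED - SQ DELETION completions that follows. In outline (a sketch of the sequence, not the harness's literal code; pids are taken from this run):

    # Failure-injection outline traced below.
    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!          # 3649478
    sleep 3
    kill -9 "$nvmfpid"      # 3649079: drop the target while I/O is in flight
    sleep 3                 # outstanding commands complete as ABORTED - SQ DELETION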
00:32:52.413 10:29:24 -- host/bdevperf.sh@33 -- # kill -9 3649079 00:32:52.413 10:29:24 -- host/bdevperf.sh@35 -- # sleep 3 00:32:52.413 [2024-04-17 10:29:25.009786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.009980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.009992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.413 [2024-04-17 10:29:25.010458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.413 [2024-04-17 10:29:25.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.010862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.010904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.010926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.010971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.010984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.010993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 
[2024-04-17 10:29:25.011049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.414 [2024-04-17 10:29:25.011257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.414 [2024-04-17 10:29:25.011300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.414 [2024-04-17 10:29:25.011312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.011698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.011742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.011896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.011918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.011983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.011995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.415 [2024-04-17 10:29:25.012138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.415 [2024-04-17 10:29:25.012171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.415 [2024-04-17 10:29:25.012181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.416 [2024-04-17 10:29:25.012203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.416 [2024-04-17 10:29:25.012287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.416 [2024-04-17 10:29:25.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.416 [2024-04-17 10:29:25.012354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 
[2024-04-17 10:29:25.012375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:52.416 [2024-04-17 10:29:25.012572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.416 [2024-04-17 10:29:25.012833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1e0c0 is same with the state(5) to be set 00:32:52.416 [2024-04-17 10:29:25.012856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:52.416 [2024-04-17 10:29:25.012864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:52.416 [2024-04-17 10:29:25.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111384 len:8 PRP1 0x0 PRP2 0x0 00:32:52.416 [2024-04-17 10:29:25.012883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.416 [2024-04-17 10:29:25.012933] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f1e0c0 was disconnected and freed. reset controller. 
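The `(00/08)` pairs in the abort spam above are the NVMe status code type / status code fields: SCT 0x0 (Generic Command Status) with SC 0x08 (Command Aborted due to SQ Deletion), which is what gets printed for every queued I/O that is drained when the qpair is torn down before the controller reset. For reference, here is a minimal, self-contained C sketch that decodes a raw 16-bit completion status word using the NVMe base-spec bit layout; this is plain illustrative code, not SPDK's implementation, and the example word is hypothetical rather than taken from this log.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit NVMe completion "phase + status" word (CQE DW3 bits 31:16).
 * Bit layout per the NVMe base spec:
 *   bit  0      P   - phase tag
 *   bits 8:1    SC  - status code
 *   bits 11:9   SCT - status code type
 *   bits 13:12  CRD - command retry delay
 *   bit  14     M   - more
 *   bit  15     DNR - do not retry
 */
static void decode_status(uint16_t word)
{
    unsigned p   = word & 0x1;
    unsigned sc  = (word >> 1) & 0xff;
    unsigned sct = (word >> 9) & 0x7;
    unsigned m   = (word >> 14) & 0x1;
    unsigned dnr = (word >> 15) & 0x1;

    printf("sct:0x%02x sc:0x%02x p:%u m:%u dnr:%u", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08) {
        /* Matches the "(00/08)" entries above: generic status,
         * "Command Aborted due to SQ Deletion". */
        printf("  -> ABORTED - SQ DELETION");
    }
    printf("\n");
}

int main(void)
{
    /* Hypothetical example word: SCT=0, SC=0x08, P/M/DNR all 0,
     * i.e. SC placed in bits 8:1 -> 0x08 << 1 = 0x0010. */
    decode_status(0x0010);
    return 0;
}
```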
00:32:52.416 [2024-04-17 10:29:25.015835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.416 [2024-04-17 10:29:25.015892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.416 [2024-04-17 10:29:25.016477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.016695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.016712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.416 [2024-04-17 10:29:25.016723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.416 [2024-04-17 10:29:25.016924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.416 [2024-04-17 10:29:25.017053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.416 [2024-04-17 10:29:25.017065] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.416 [2024-04-17 10:29:25.017077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.416 [2024-04-17 10:29:25.019826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.416 [2024-04-17 10:29:25.029143] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.416 [2024-04-17 10:29:25.029676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.029868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.029901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.416 [2024-04-17 10:29:25.029923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.416 [2024-04-17 10:29:25.030253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.416 [2024-04-17 10:29:25.030535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.416 [2024-04-17 10:29:25.030559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.416 [2024-04-17 10:29:25.030580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.416 [2024-04-17 10:29:25.033521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
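Every reset attempt in this block fails the same way: `posix_sock_create` reports `connect() failed, errno = 111` (ECONNREFUSED) for 10.0.0.2:4420, so `nvme_tcp_qpair_connect_sock` never gets a usable socket and the controller init/reconnect path marks cnode1 as failed. The snippet below is a standalone C sketch of that same TCP connect outside of SPDK, assuming the host is reachable but nothing is listening on the port; it only shows where errno 111 comes from and is not part of the test code.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target taken from the log: NVMe/TCP listener address 10.0.0.2, port 4420. */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With no listener on 10.0.0.2:4420 (but the host reachable), this fails
     * with errno 111 (ECONNREFUSED), matching "connect() failed, errno = 111". */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```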
00:32:52.416 [2024-04-17 10:29:25.042043] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.416 [2024-04-17 10:29:25.042453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.042808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.416 [2024-04-17 10:29:25.042841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.416 [2024-04-17 10:29:25.042863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.416 [2024-04-17 10:29:25.043143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.416 [2024-04-17 10:29:25.043474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.043503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.043513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.046295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.417 [2024-04-17 10:29:25.055128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.055639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.055844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.055876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.055899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.056266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.056490] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.056507] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.056521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.060288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.417 [2024-04-17 10:29:25.068691] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.069123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.069261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.069277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.069287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.069462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.069668] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.069682] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.069693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.072349] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.417 [2024-04-17 10:29:25.081871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.082211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.082420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.082436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.082447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.082575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.082736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.082750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.082760] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.085347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.417 [2024-04-17 10:29:25.094961] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.095504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.095740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.095774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.095796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.096062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.096215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.096228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.096238] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.098988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.417 [2024-04-17 10:29:25.107778] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.108179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.108439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.108455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.108466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.108617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.108867] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.108881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.108891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.111629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.417 [2024-04-17 10:29:25.120584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.121070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.121277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.121292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.121304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.121524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.121774] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.121787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.121798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.124605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.417 [2024-04-17 10:29:25.133659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.417 [2024-04-17 10:29:25.134114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.134377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.417 [2024-04-17 10:29:25.134409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.417 [2024-04-17 10:29:25.134431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.417 [2024-04-17 10:29:25.134821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.417 [2024-04-17 10:29:25.135098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.417 [2024-04-17 10:29:25.135111] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.417 [2024-04-17 10:29:25.135121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.417 [2024-04-17 10:29:25.137899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.417 [2024-04-17 10:29:25.146733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.147183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.147426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.147457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.147486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.147832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.148164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.148182] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.148196] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.151958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.418 [2024-04-17 10:29:25.160082] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.160499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.160839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.160872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.160894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.161274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.161716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.161742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.161763] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.164915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.418 [2024-04-17 10:29:25.172871] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.173335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.173655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.173689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.173712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.173906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.174104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.174117] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.174127] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.176896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.418 [2024-04-17 10:29:25.185805] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.186299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.186584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.186615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.186637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.187086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.187307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.187320] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.187330] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.190140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.418 [2024-04-17 10:29:25.198608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.199045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.199287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.199319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.199341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.199683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.200067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.200091] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.200112] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.202944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.418 [2024-04-17 10:29:25.211681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.212047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.212327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.212358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.212380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.212776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.213085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.213098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.213107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.215740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.418 [2024-04-17 10:29:25.224796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.225190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.225412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.225427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.225439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.225545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.225682] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.225696] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.225705] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.228536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.418 [2024-04-17 10:29:25.237714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.237992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.238233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.238264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.238286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.238777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.239057] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.239070] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.239079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.242946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.418 [2024-04-17 10:29:25.251067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.251462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.251764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.418 [2024-04-17 10:29:25.251797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.418 [2024-04-17 10:29:25.251820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.418 [2024-04-17 10:29:25.252198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.418 [2024-04-17 10:29:25.252486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.418 [2024-04-17 10:29:25.252499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.418 [2024-04-17 10:29:25.252508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.418 [2024-04-17 10:29:25.255297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.418 [2024-04-17 10:29:25.264087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.418 [2024-04-17 10:29:25.264541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.265989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.266019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.266032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.266214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.266417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.266430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.266440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.269379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.419 [2024-04-17 10:29:25.276982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.277419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.277671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.277689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.277700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.277876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.278097] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.278110] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.278121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.280673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.419 [2024-04-17 10:29:25.290286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.290667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.290929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.290945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.290957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.291178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.291353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.291366] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.291376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.294103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.419 [2024-04-17 10:29:25.303494] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.304024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.304271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.304304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.304327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.304819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.305203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.305227] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.305254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.308123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.419 [2024-04-17 10:29:25.316403] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.316816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.317061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.317093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.317116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.317400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.317553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.317565] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.317575] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.320457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.419 [2024-04-17 10:29:25.329307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.329811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.329978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.329994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.330004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.330178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.330354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.330367] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.330376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.333327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.419 [2024-04-17 10:29:25.342153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.342674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.342865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.342897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.342920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.343399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.343675] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.343689] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.343702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.346888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.419 [2024-04-17 10:29:25.355361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.355810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.356019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.356051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.356074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.356479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.356586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.356599] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.356609] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.359200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.419 [2024-04-17 10:29:25.368368] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.368744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.368943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.368959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.368971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.369213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.419 [2024-04-17 10:29:25.369612] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.419 [2024-04-17 10:29:25.369625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.419 [2024-04-17 10:29:25.369634] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.419 [2024-04-17 10:29:25.372633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.419 [2024-04-17 10:29:25.381209] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.419 [2024-04-17 10:29:25.381687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.381877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.419 [2024-04-17 10:29:25.381907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.419 [2024-04-17 10:29:25.381930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.419 [2024-04-17 10:29:25.382309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.382641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.382660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.382670] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.385499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.420 [2024-04-17 10:29:25.394141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.394627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.394956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.394987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.395009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.395339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.395692] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.395705] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.395716] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.398499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.420 [2024-04-17 10:29:25.407044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.407460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.407743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.407777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.407799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.408179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.408609] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.408635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.408677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.411437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.420 [2024-04-17 10:29:25.420049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.420521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.420764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.420797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.420820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.421163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.421518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.421537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.421551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.425511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.420 [2024-04-17 10:29:25.433510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.433992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.434339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.434370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.434392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.434685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.434817] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.434829] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.434839] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.437600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.420 [2024-04-17 10:29:25.446532] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.446997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.447353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.447384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.447405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.447582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.447810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.447823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.447833] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.450614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.420 [2024-04-17 10:29:25.459465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.459928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.460129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.460145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.460156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.460376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.460550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.460563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.460573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.463345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.420 [2024-04-17 10:29:25.472448] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.472953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.473241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.473272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.473295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.473688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.473944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.473956] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.473966] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.476594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.420 [2024-04-17 10:29:25.485362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.485828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.486160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.486191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.486212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.486492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.486816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.486829] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.486840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.489514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.420 [2024-04-17 10:29:25.498191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.420 [2024-04-17 10:29:25.498664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.498979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.420 [2024-04-17 10:29:25.499010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.420 [2024-04-17 10:29:25.499032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.420 [2024-04-17 10:29:25.499312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.420 [2024-04-17 10:29:25.499679] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.420 [2024-04-17 10:29:25.499693] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.420 [2024-04-17 10:29:25.499704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.420 [2024-04-17 10:29:25.502442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.421 [2024-04-17 10:29:25.511388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.511818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.512186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.512218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.512247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.512693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.512981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.512998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.513012] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.517232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.421 [2024-04-17 10:29:25.524702] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.525182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.525418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.525449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.525473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.525861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.526060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.526072] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.526082] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.528806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.421 [2024-04-17 10:29:25.537867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.538409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.538591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.538607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.538617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.538800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.538954] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.538966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.538976] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.541794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.421 [2024-04-17 10:29:25.550666] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.551174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.551484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.551515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.551544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.551902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.552056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.552069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.552078] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.554755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.421 [2024-04-17 10:29:25.563699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.564120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.564412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.564443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.564465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.564858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.565095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.565108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.565118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.567839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.421 [2024-04-17 10:29:25.576738] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.577222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.577546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.577578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.577601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.577943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.578212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.578226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.578235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.581075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.421 [2024-04-17 10:29:25.589790] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.590286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.590521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.590552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.590574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.590878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.591055] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.591068] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.591077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.593635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.421 [2024-04-17 10:29:25.602634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.603105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.603370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.603401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.603423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.603812] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.604209] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.604226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.604241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.608397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.421 [2024-04-17 10:29:25.616170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.616521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.616722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.616738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.421 [2024-04-17 10:29:25.616749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.421 [2024-04-17 10:29:25.616946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.421 [2024-04-17 10:29:25.617120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.421 [2024-04-17 10:29:25.617133] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.421 [2024-04-17 10:29:25.617143] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.421 [2024-04-17 10:29:25.619866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.421 [2024-04-17 10:29:25.628888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.421 [2024-04-17 10:29:25.629228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.421 [2024-04-17 10:29:25.629515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.629545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.629568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.629925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.630061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.630074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.630084] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.632901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.422 [2024-04-17 10:29:25.641810] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.642169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.642452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.642484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.642505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.643026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.643248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.643260] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.643270] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.646096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.422 [2024-04-17 10:29:25.654802] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.655188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.655440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.655471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.655495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.656040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.656301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.656313] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.656323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.658868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.422 [2024-04-17 10:29:25.667925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.668304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.668623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.668668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.668691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.669121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.669406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.669422] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.669433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.672311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.422 [2024-04-17 10:29:25.680772] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.681319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.681542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.681557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.681568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.681770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.681947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.681959] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.681969] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.684552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.422 [2024-04-17 10:29:25.693695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.694158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.694472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.694504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.694526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.694772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.695155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.695180] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.695199] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.698621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.422 [2024-04-17 10:29:25.707019] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.707424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.707741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.707775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.707798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.708177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.708508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.708521] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.708535] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.711323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.422 [2024-04-17 10:29:25.719968] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.720452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.720763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.720796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.422 [2024-04-17 10:29:25.720818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.422 [2024-04-17 10:29:25.721198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.422 [2024-04-17 10:29:25.721440] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.422 [2024-04-17 10:29:25.721453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.422 [2024-04-17 10:29:25.721463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.422 [2024-04-17 10:29:25.724072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.422 [2024-04-17 10:29:25.732982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.422 [2024-04-17 10:29:25.733500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.422 [2024-04-17 10:29:25.733813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.423 [2024-04-17 10:29:25.733847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.423 [2024-04-17 10:29:25.733869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.423 [2024-04-17 10:29:25.734199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.423 [2024-04-17 10:29:25.734329] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.423 [2024-04-17 10:29:25.734341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.423 [2024-04-17 10:29:25.734351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.423 [2024-04-17 10:29:25.737163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.683 [2024-04-17 10:29:25.745916] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.746298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.746394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.746409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.683 [2024-04-17 10:29:25.746420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.683 [2024-04-17 10:29:25.746594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.683 [2024-04-17 10:29:25.746800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.683 [2024-04-17 10:29:25.746814] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.683 [2024-04-17 10:29:25.746824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.683 [2024-04-17 10:29:25.749610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.683 [2024-04-17 10:29:25.758999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.759455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.759707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.759742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.683 [2024-04-17 10:29:25.759764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.683 [2024-04-17 10:29:25.760144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.683 [2024-04-17 10:29:25.760346] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.683 [2024-04-17 10:29:25.760359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.683 [2024-04-17 10:29:25.760369] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.683 [2024-04-17 10:29:25.763136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.683 [2024-04-17 10:29:25.772068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.772520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.772756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.772790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.683 [2024-04-17 10:29:25.772812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.683 [2024-04-17 10:29:25.773191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.683 [2024-04-17 10:29:25.773485] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.683 [2024-04-17 10:29:25.773498] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.683 [2024-04-17 10:29:25.773507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.683 [2024-04-17 10:29:25.776117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.683 [2024-04-17 10:29:25.784984] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.785405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.785687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.785721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.683 [2024-04-17 10:29:25.785743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.683 [2024-04-17 10:29:25.786221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.683 [2024-04-17 10:29:25.786517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.683 [2024-04-17 10:29:25.786533] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.683 [2024-04-17 10:29:25.786546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.683 [2024-04-17 10:29:25.789951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.683 [2024-04-17 10:29:25.798480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.798939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.799192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.799223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.683 [2024-04-17 10:29:25.799245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.683 [2024-04-17 10:29:25.799694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.683 [2024-04-17 10:29:25.799871] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.683 [2024-04-17 10:29:25.799884] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.683 [2024-04-17 10:29:25.799893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.683 [2024-04-17 10:29:25.802659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.683 [2024-04-17 10:29:25.811290] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.683 [2024-04-17 10:29:25.811731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.812046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.683 [2024-04-17 10:29:25.812078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.812100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.812332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.812517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.812530] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.812539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.815220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.684 [2024-04-17 10:29:25.824272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.824795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.824974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.824990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.825000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.825152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.825349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.825362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.825371] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.828051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.684 [2024-04-17 10:29:25.837072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.837558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.837764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.837781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.837793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.837969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.838122] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.838134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.838144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.841090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.684 [2024-04-17 10:29:25.850205] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.850604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.850879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.850896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.850907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.851106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.851280] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.851293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.851303] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.854005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.684 [2024-04-17 10:29:25.863114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.863634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.863844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.863875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.863897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.864228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.864563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.864576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.864585] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.867152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.684 [2024-04-17 10:29:25.876192] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.876697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.876929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.876967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.876989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.877418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.877831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.877844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.877854] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.881478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.684 [2024-04-17 10:29:25.889962] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.890425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.890735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.890768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.890789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.891033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.891208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.891221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.891230] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.894175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.684 [2024-04-17 10:29:25.902921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.903421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.903697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.903729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.903751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.904082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.904512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.904537] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.904558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.907394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.684 [2024-04-17 10:29:25.915614] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.916123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.916384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.916415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.916444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.916738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.916960] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.916972] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.684 [2024-04-17 10:29:25.916981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.684 [2024-04-17 10:29:25.919632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.684 [2024-04-17 10:29:25.928588] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.684 [2024-04-17 10:29:25.929073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.929325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.684 [2024-04-17 10:29:25.929357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.684 [2024-04-17 10:29:25.929379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.684 [2024-04-17 10:29:25.929667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.684 [2024-04-17 10:29:25.929888] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.684 [2024-04-17 10:29:25.929901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.929911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.932606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.685 [2024-04-17 10:29:25.941676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:25.942029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.942172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.942187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:25.942198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:25.942327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:25.942501] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:25.942513] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.942523] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.945052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.685 [2024-04-17 10:29:25.954918] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:25.955343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.955625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.955672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:25.955695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:25.956082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:25.956463] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:25.956487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.956508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.959698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.685 [2024-04-17 10:29:25.967856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:25.968371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.968562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.968594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:25.968615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:25.968991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:25.969190] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:25.969202] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.969211] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.972000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.685 [2024-04-17 10:29:25.980633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:25.981128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.981413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.981445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:25.981466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:25.981752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:25.981974] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:25.981986] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.981995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.984809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.685 [2024-04-17 10:29:25.993483] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:25.993976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.994291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:25.994322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:25.994344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:25.994788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:25.995078] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:25.995103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:25.995124] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:25.997973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.685 [2024-04-17 10:29:26.006423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.685 [2024-04-17 10:29:26.006879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:26.007068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.685 [2024-04-17 10:29:26.007099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.685 [2024-04-17 10:29:26.007122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.685 [2024-04-17 10:29:26.007452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.685 [2024-04-17 10:29:26.007747] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.685 [2024-04-17 10:29:26.007773] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.685 [2024-04-17 10:29:26.007794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.685 [2024-04-17 10:29:26.010691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.945 [2024-04-17 10:29:26.019550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.945 [2024-04-17 10:29:26.020037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.945 [2024-04-17 10:29:26.020315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.945 [2024-04-17 10:29:26.020331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.945 [2024-04-17 10:29:26.020341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.945 [2024-04-17 10:29:26.020559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.945 [2024-04-17 10:29:26.020718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.945 [2024-04-17 10:29:26.020732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.945 [2024-04-17 10:29:26.020741] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.945 [2024-04-17 10:29:26.023547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.945 [2024-04-17 10:29:26.032592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.945 [2024-04-17 10:29:26.033112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.945 [2024-04-17 10:29:26.033346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.945 [2024-04-17 10:29:26.033377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.945 [2024-04-17 10:29:26.033398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.033840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.034264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.034280] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.034290] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.036921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.946 [2024-04-17 10:29:26.045771] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.046231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.046433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.046463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.046486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.046782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.047099] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.047112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.047121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.049845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.946 [2024-04-17 10:29:26.059011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.059395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.059589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.059605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.059616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.059820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.060018] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.060031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.060040] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.062854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.946 [2024-04-17 10:29:26.072080] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.072584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.072833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.072850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.072861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.073058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.073256] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.073269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.073282] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.076165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.946 [2024-04-17 10:29:26.084850] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.085338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.085605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.085636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.085673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.085963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.086092] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.086104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.086114] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.088882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.946 [2024-04-17 10:29:26.097944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.098355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.098690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.098724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.098746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.099077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.099350] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.099362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.099372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.101872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.946 [2024-04-17 10:29:26.110945] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.111311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.111547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.111578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.111600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.111844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.112229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.112241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.112251] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.114977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.946 [2024-04-17 10:29:26.124025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.124505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.124685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.124719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.124741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.125120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.125475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.125488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.125498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.946 [2024-04-17 10:29:26.128244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.946 [2024-04-17 10:29:26.136985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.946 [2024-04-17 10:29:26.137341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.137593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.946 [2024-04-17 10:29:26.137622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.946 [2024-04-17 10:29:26.137660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.946 [2024-04-17 10:29:26.138041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.946 [2024-04-17 10:29:26.138294] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.946 [2024-04-17 10:29:26.138307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.946 [2024-04-17 10:29:26.138317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.141038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.947 [2024-04-17 10:29:26.149865] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.150294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.150610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.150641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.150683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.151113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.151341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.151353] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.151363] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.155166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.947 [2024-04-17 10:29:26.163465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.164008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.164316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.164348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.164370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.164707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.164883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.164895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.164905] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.167762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.947 [2024-04-17 10:29:26.176434] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.176825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.177107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.177138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.177161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.177632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.177813] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.177826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.177836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.180554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.947 [2024-04-17 10:29:26.189434] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.189837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.190030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.190046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.190056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.190252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.190450] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.190463] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.190472] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.193039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.947 [2024-04-17 10:29:26.202220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.202664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.202894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.202910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.202920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.203139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.203292] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.203304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.203314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.205861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.947 [2024-04-17 10:29:26.215094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.215539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.215788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.215822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.215845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.216226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.216603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.216616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.216626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.219326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.947 [2024-04-17 10:29:26.228118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.228661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.228999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.229030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.229053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.229432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.229650] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.229664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.229674] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.232573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.947 [2024-04-17 10:29:26.241113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.241624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.241952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.241991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.242014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.242395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.947 [2024-04-17 10:29:26.242760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.947 [2024-04-17 10:29:26.242774] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.947 [2024-04-17 10:29:26.242784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.947 [2024-04-17 10:29:26.246592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.947 [2024-04-17 10:29:26.254847] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.947 [2024-04-17 10:29:26.255361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.255689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.947 [2024-04-17 10:29:26.255723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.947 [2024-04-17 10:29:26.255745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.947 [2024-04-17 10:29:26.256275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.948 [2024-04-17 10:29:26.256531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.948 [2024-04-17 10:29:26.256544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.948 [2024-04-17 10:29:26.256554] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.948 [2024-04-17 10:29:26.259342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:52.948 [2024-04-17 10:29:26.268000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.948 [2024-04-17 10:29:26.268446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-04-17 10:29:26.268785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:52.948 [2024-04-17 10:29:26.268818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:52.948 [2024-04-17 10:29:26.268840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:52.948 [2024-04-17 10:29:26.269070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:52.948 [2024-04-17 10:29:26.269302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.948 [2024-04-17 10:29:26.269326] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.948 [2024-04-17 10:29:26.269347] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.948 [2024-04-17 10:29:26.272138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.207 [2024-04-17 10:29:26.281006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.207 [2024-04-17 10:29:26.281549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.281799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.281815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.207 [2024-04-17 10:29:26.281830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.207 [2024-04-17 10:29:26.282028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.207 [2024-04-17 10:29:26.282203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.207 [2024-04-17 10:29:26.282215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.207 [2024-04-17 10:29:26.282224] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.207 [2024-04-17 10:29:26.284812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.207 [2024-04-17 10:29:26.294177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.207 [2024-04-17 10:29:26.294615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.294944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.294976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.207 [2024-04-17 10:29:26.294998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.207 [2024-04-17 10:29:26.295477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.207 [2024-04-17 10:29:26.295687] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.207 [2024-04-17 10:29:26.295700] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.207 [2024-04-17 10:29:26.295710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.207 [2024-04-17 10:29:26.298359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.207 [2024-04-17 10:29:26.307182] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.207 [2024-04-17 10:29:26.307666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.307982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.207 [2024-04-17 10:29:26.308012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.207 [2024-04-17 10:29:26.308034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.207 [2024-04-17 10:29:26.308365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.207 [2024-04-17 10:29:26.308639] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.207 [2024-04-17 10:29:26.308660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.207 [2024-04-17 10:29:26.308672] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.207 [2024-04-17 10:29:26.311323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.207 [2024-04-17 10:29:26.320221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.320725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.321018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.321049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.321071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.321358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.321707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.321720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.321730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.324560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.208 [2024-04-17 10:29:26.332988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.333476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.333760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.333793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.333816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.334005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.334180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.334192] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.334202] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.336988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.208 [2024-04-17 10:29:26.346369] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.346885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.347201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.347233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.347256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.347586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.347897] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.347910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.347920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.350667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.208 [2024-04-17 10:29:26.359531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.359851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.360101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.360138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.360160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.360490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.360853] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.360867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.360877] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.363393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.208 [2024-04-17 10:29:26.372416] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.372912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.373141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.373173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.373195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.373628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.373844] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.373861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.373874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.377687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.208 [2024-04-17 10:29:26.385773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.386206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.386422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.386438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.386449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.386655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.386855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.386867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.386877] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.389504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.208 [2024-04-17 10:29:26.398651] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.399095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.399319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.399350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.399371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.399815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.400244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.400261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.400271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.403062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.208 [2024-04-17 10:29:26.411489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.411821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.412095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.412111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.412121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.412273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.412470] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.412483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.412494] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.415104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.208 [2024-04-17 10:29:26.424240] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.424686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.425299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.425324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.208 [2024-04-17 10:29:26.425336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.208 [2024-04-17 10:29:26.425516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.208 [2024-04-17 10:29:26.425702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.208 [2024-04-17 10:29:26.425717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.208 [2024-04-17 10:29:26.425726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.208 [2024-04-17 10:29:26.428489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.208 [2024-04-17 10:29:26.437361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.208 [2024-04-17 10:29:26.437658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.437796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.208 [2024-04-17 10:29:26.437812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.437822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.437997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.438195] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.438208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.438223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.441282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.209 [2024-04-17 10:29:26.450181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.450610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.450798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.450815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.450825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.451001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.451200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.451213] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.451223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.453857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.209 [2024-04-17 10:29:26.463129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.463592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.463867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.463883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.463894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.463978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.464129] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.464142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.464151] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.467099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.209 [2024-04-17 10:29:26.476084] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.476602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.476851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.476868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.476878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.477098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.477250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.477263] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.477273] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.480066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.209 [2024-04-17 10:29:26.489131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.489578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.489855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.489872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.489882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.490057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.490164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.490176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.490186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.492937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.209 [2024-04-17 10:29:26.502006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.502486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.502736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.502753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.502764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.502983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.503113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.503126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.503136] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.505860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.209 [2024-04-17 10:29:26.514932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.515393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.515649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.515666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.515676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.515873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.516025] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.516037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.516047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.518677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.209 [2024-04-17 10:29:26.527863] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.209 [2024-04-17 10:29:26.528307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.528526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.209 [2024-04-17 10:29:26.528541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.209 [2024-04-17 10:29:26.528552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.209 [2024-04-17 10:29:26.528710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.209 [2024-04-17 10:29:26.528862] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.209 [2024-04-17 10:29:26.528874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.209 [2024-04-17 10:29:26.528884] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.209 [2024-04-17 10:29:26.531853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.496 [2024-04-17 10:29:26.540952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.496 [2024-04-17 10:29:26.541341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.496 [2024-04-17 10:29:26.541592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.496 [2024-04-17 10:29:26.541608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.496 [2024-04-17 10:29:26.541618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.496 [2024-04-17 10:29:26.541778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.496 [2024-04-17 10:29:26.541885] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.496 [2024-04-17 10:29:26.541897] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.496 [2024-04-17 10:29:26.541907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.496 [2024-04-17 10:29:26.544902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.496 [2024-04-17 10:29:26.553917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.496 [2024-04-17 10:29:26.554401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.496 [2024-04-17 10:29:26.554659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.496 [2024-04-17 10:29:26.554677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.496 [2024-04-17 10:29:26.554688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.496 [2024-04-17 10:29:26.554819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.496 [2024-04-17 10:29:26.555038] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.496 [2024-04-17 10:29:26.555051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.496 [2024-04-17 10:29:26.555061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.496 [2024-04-17 10:29:26.557905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.496 [2024-04-17 10:29:26.566823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.496 [2024-04-17 10:29:26.567326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.496 [2024-04-17 10:29:26.567454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.567471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.567482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.567634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.567795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.567809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.567818] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.570543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.497 [2024-04-17 10:29:26.579779] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.580141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.580428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.580460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.580482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.580737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.580914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.580926] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.580936] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.583633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.497 [2024-04-17 10:29:26.592985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.593418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.593638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.593684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.593706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.593985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.594366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.594391] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.594411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.596988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.497 [2024-04-17 10:29:26.605773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.606057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.606188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.606207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.606218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.606415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.606567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.606579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.606588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.609269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.497 [2024-04-17 10:29:26.618648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.619026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.619338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.619370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.619392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.619656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.619810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.619822] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.619832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.622621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.497 [2024-04-17 10:29:26.631770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.633008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.633180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.633199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.633210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.633371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.633504] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.633516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.633526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.636140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.497 [2024-04-17 10:29:26.644886] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.645276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.645531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.645548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.645564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.645745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.645944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.645956] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.645965] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.648825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.497 [2024-04-17 10:29:26.658061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.658446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.658703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.658738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.658760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.659091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.659472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.659497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.659518] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.662318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.497 [2024-04-17 10:29:26.671160] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.671579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.671803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.671835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.497 [2024-04-17 10:29:26.671858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.497 [2024-04-17 10:29:26.672187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.497 [2024-04-17 10:29:26.672468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.497 [2024-04-17 10:29:26.672493] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.497 [2024-04-17 10:29:26.672513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.497 [2024-04-17 10:29:26.675347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.497 [2024-04-17 10:29:26.684169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.497 [2024-04-17 10:29:26.684480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.497 [2024-04-17 10:29:26.684682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.684698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.684709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.684842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.684971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.684983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.684992] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.687735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.498 [2024-04-17 10:29:26.697172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.697535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.697740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.697773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.697795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.698125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.698530] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.698542] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.698552] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.702545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.498 [2024-04-17 10:29:26.710563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.710936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.711075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.711091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.711101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.711299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.711495] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.711508] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.711518] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.713993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.498 [2024-04-17 10:29:26.723627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.724076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.724232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.724263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.724286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.724669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.724896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.724908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.724918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.727477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.498 [2024-04-17 10:29:26.736639] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.737043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.737239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.737254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.737264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.737438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.737636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.737655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.737666] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.740313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.498 [2024-04-17 10:29:26.749576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.750042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.750258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.750289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.750311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.750638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.750819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.750832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.750842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.753582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.498 [2024-04-17 10:29:26.762632] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.763015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.763249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.763280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.763302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.763692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.763882] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.763898] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.763909] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.766609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.498 [2024-04-17 10:29:26.775684] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.776099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.776359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.776390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.776412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.498 [2024-04-17 10:29:26.776704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.498 [2024-04-17 10:29:26.777012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.498 [2024-04-17 10:29:26.777024] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.498 [2024-04-17 10:29:26.777034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.498 [2024-04-17 10:29:26.779911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.498 [2024-04-17 10:29:26.788463] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.498 [2024-04-17 10:29:26.788813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.789887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.498 [2024-04-17 10:29:26.789917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.498 [2024-04-17 10:29:26.789930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.499 [2024-04-17 10:29:26.790113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.499 [2024-04-17 10:29:26.790221] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.499 [2024-04-17 10:29:26.790233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.499 [2024-04-17 10:29:26.790243] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.499 [2024-04-17 10:29:26.792859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.499 [2024-04-17 10:29:26.801319] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.499 [2024-04-17 10:29:26.801767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.499 [2024-04-17 10:29:26.801911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.499 [2024-04-17 10:29:26.801927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.499 [2024-04-17 10:29:26.801938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.499 [2024-04-17 10:29:26.802112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.499 [2024-04-17 10:29:26.802242] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.499 [2024-04-17 10:29:26.802255] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.499 [2024-04-17 10:29:26.802268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.499 [2024-04-17 10:29:26.805224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.760 [2024-04-17 10:29:26.814329] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.760 [2024-04-17 10:29:26.814714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.760 [2024-04-17 10:29:26.814959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.760 [2024-04-17 10:29:26.814975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.760 [2024-04-17 10:29:26.814985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.760 [2024-04-17 10:29:26.815160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.760 [2024-04-17 10:29:26.815336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.760 [2024-04-17 10:29:26.815348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.760 [2024-04-17 10:29:26.815358] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.760 [2024-04-17 10:29:26.818082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.760 [2024-04-17 10:29:26.827250] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.760 [2024-04-17 10:29:26.827729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.760 [2024-04-17 10:29:26.828073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.760 [2024-04-17 10:29:26.828105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.760 [2024-04-17 10:29:26.828128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.760 [2024-04-17 10:29:26.828507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.760 [2024-04-17 10:29:26.828803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.760 [2024-04-17 10:29:26.828829] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.760 [2024-04-17 10:29:26.828849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.760 [2024-04-17 10:29:26.831838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.761 [2024-04-17 10:29:26.840072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.840534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.840786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.840802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.840813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.840987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.841140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.841152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.841162] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.844031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.761 [2024-04-17 10:29:26.853136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.853449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.853657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.853674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.853684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.853814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.853965] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.853978] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.853987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.856616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.761 [2024-04-17 10:29:26.866105] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.866611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.866826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.866858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.866881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.867279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.867432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.867445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.867454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.870177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.761 [2024-04-17 10:29:26.879181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.879596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.879851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.879867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.879879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.880007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.880159] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.880171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.880181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.882930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.761 [2024-04-17 10:29:26.892173] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.892655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.892941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.892973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.892995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.893373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.893618] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.893631] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.893640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.896385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.761 [2024-04-17 10:29:26.905313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.905819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.906134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.906166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.906188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.906383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.906559] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.906572] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.906581] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.909172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.761 [2024-04-17 10:29:26.918302] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.918761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.919019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.919051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.919073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.919402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.919848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.919874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.919896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.922695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.761 [2024-04-17 10:29:26.931433] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.931915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.932162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.932195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.932217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.932472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.932625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.932638] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.932653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.935303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.761 [2024-04-17 10:29:26.944435] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.761 [2024-04-17 10:29:26.944893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.945151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.761 [2024-04-17 10:29:26.945182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.761 [2024-04-17 10:29:26.945205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.761 [2024-04-17 10:29:26.945745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.761 [2024-04-17 10:29:26.946079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.761 [2024-04-17 10:29:26.946103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.761 [2024-04-17 10:29:26.946123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.761 [2024-04-17 10:29:26.948785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.761 [2024-04-17 10:29:26.957616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:26.958047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.958295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.958311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:26.958321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:26.958428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:26.958625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:26.958637] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:26.958653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:26.961482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.762 [2024-04-17 10:29:26.970507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:26.970957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.971238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.971277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:26.971299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:26.971693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:26.972027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:26.972051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:26.972071] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:26.976063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.762 [2024-04-17 10:29:26.983827] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:26.984318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.984553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.984584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:26.984606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:26.985053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:26.985391] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:26.985404] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:26.985413] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:26.988338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.762 [2024-04-17 10:29:26.996852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:26.997283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.997565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:26.997596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:26.997618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:26.997862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:26.998196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:26.998220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:26.998240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:27.001024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.762 [2024-04-17 10:29:27.009797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:27.010204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.010357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.010373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:27.010387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:27.010583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:27.010720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:27.010733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:27.010742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:27.013551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.762 [2024-04-17 10:29:27.022892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:27.023344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.023542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.023558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:27.023568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:27.023794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:27.023925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:27.023937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:27.023947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:27.026690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.762 [2024-04-17 10:29:27.035996] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:27.036408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.036625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.036671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:27.036695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:27.037175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:27.037407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:27.037432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:27.037456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:27.040375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.762 [2024-04-17 10:29:27.048866] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:27.049285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.049473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.049489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:27.049500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:27.049684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.762 [2024-04-17 10:29:27.049905] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.762 [2024-04-17 10:29:27.049918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.762 [2024-04-17 10:29:27.049928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.762 [2024-04-17 10:29:27.052578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.762 [2024-04-17 10:29:27.061715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.762 [2024-04-17 10:29:27.062238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.062525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.762 [2024-04-17 10:29:27.062556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.762 [2024-04-17 10:29:27.062578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.762 [2024-04-17 10:29:27.062876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.763 [2024-04-17 10:29:27.063053] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.763 [2024-04-17 10:29:27.063065] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.763 [2024-04-17 10:29:27.063074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.763 [2024-04-17 10:29:27.065863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:53.763 [2024-04-17 10:29:27.074933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.763 [2024-04-17 10:29:27.075415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.763 [2024-04-17 10:29:27.075662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.763 [2024-04-17 10:29:27.075695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.763 [2024-04-17 10:29:27.075717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.763 [2024-04-17 10:29:27.076097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.763 [2024-04-17 10:29:27.076429] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.763 [2024-04-17 10:29:27.076454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.763 [2024-04-17 10:29:27.076474] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.763 [2024-04-17 10:29:27.079636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.763 [2024-04-17 10:29:27.087633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:53.763 [2024-04-17 10:29:27.088092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.763 [2024-04-17 10:29:27.088369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:53.763 [2024-04-17 10:29:27.088384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:53.763 [2024-04-17 10:29:27.088395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:53.763 [2024-04-17 10:29:27.088546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:53.763 [2024-04-17 10:29:27.088707] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.763 [2024-04-17 10:29:27.088721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.763 [2024-04-17 10:29:27.088731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.023 [2024-04-17 10:29:27.091443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.023 [2024-04-17 10:29:27.100774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.023 [2024-04-17 10:29:27.101233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.101522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.101554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.023 [2024-04-17 10:29:27.101576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.023 [2024-04-17 10:29:27.101896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.023 [2024-04-17 10:29:27.102095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.023 [2024-04-17 10:29:27.102107] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.023 [2024-04-17 10:29:27.102117] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.023 [2024-04-17 10:29:27.104925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.023 [2024-04-17 10:29:27.113720] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.023 [2024-04-17 10:29:27.114184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.114415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.114446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.023 [2024-04-17 10:29:27.114468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.023 [2024-04-17 10:29:27.114805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.023 [2024-04-17 10:29:27.114936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.023 [2024-04-17 10:29:27.114948] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.023 [2024-04-17 10:29:27.114958] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.023 [2024-04-17 10:29:27.117631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.023 [2024-04-17 10:29:27.126719] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.023 [2024-04-17 10:29:27.127196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.127382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.127414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.023 [2024-04-17 10:29:27.127438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.023 [2024-04-17 10:29:27.127882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.023 [2024-04-17 10:29:27.128088] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.023 [2024-04-17 10:29:27.128105] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.023 [2024-04-17 10:29:27.128115] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.023 [2024-04-17 10:29:27.131037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.023 [2024-04-17 10:29:27.139520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.023 [2024-04-17 10:29:27.139935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.140230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.023 [2024-04-17 10:29:27.140262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.140284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.140729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.140940] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.140953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.140962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.143454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.024 [2024-04-17 10:29:27.152486] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.152905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.153141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.153173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.153195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.153573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.153796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.153809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.153819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.156785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.024 [2024-04-17 10:29:27.165502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.166055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.166264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.166295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.166317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.166658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.166888] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.166901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.166914] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.169883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.024 [2024-04-17 10:29:27.178066] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.178489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.178703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.178737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.178758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.179236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.179519] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.179544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.179564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.182278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.024 [2024-04-17 10:29:27.191098] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.191620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.191945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.191983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.191994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.192214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.192412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.192425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.192435] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.195071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.024 [2024-04-17 10:29:27.204058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.204539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.204722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.204740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.204750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.204902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.205075] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.205087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.205097] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.207845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.024 [2024-04-17 10:29:27.216932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.217295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.217578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.217610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.217632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.217921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.218074] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.218086] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.218096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.220732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.024 [2024-04-17 10:29:27.229766] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.230182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.230493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.230524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.230547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.231039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.231261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.231274] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.231283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.233914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.024 [2024-04-17 10:29:27.242704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.243189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.243500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.243531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.243553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.244009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.244351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.024 [2024-04-17 10:29:27.244363] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.024 [2024-04-17 10:29:27.244373] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.024 [2024-04-17 10:29:27.247028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.024 [2024-04-17 10:29:27.255736] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.024 [2024-04-17 10:29:27.256219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.256458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.024 [2024-04-17 10:29:27.256488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.024 [2024-04-17 10:29:27.256510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.024 [2024-04-17 10:29:27.256756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.024 [2024-04-17 10:29:27.257067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.257079] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.257089] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.259966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.025 [2024-04-17 10:29:27.268530] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.269017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.269268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.269297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.269321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.269600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.269985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.269998] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.270008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.272550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.025 [2024-04-17 10:29:27.281599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.282062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.282308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.282339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.282361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.282754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.283084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.283097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.283109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.285969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.025 [2024-04-17 10:29:27.294289] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.294766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.294990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.295006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.295017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.295191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.295344] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.295357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.295367] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.298311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.025 [2024-04-17 10:29:27.307083] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.307568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.307805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.307839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.307861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.308162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.308337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.308350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.308360] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.311015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.025 [2024-04-17 10:29:27.320052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.320539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.320799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.320832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.320854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.321114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.321314] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.321326] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.321336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.323920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.025 [2024-04-17 10:29:27.333122] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.333561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.333833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.333873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.333896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.334128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.334304] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.334317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.334327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.337090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.025 [2024-04-17 10:29:27.346315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.025 [2024-04-17 10:29:27.346795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.347031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.025 [2024-04-17 10:29:27.347063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.025 [2024-04-17 10:29:27.347086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.025 [2024-04-17 10:29:27.347368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.025 [2024-04-17 10:29:27.347589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.025 [2024-04-17 10:29:27.347601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.025 [2024-04-17 10:29:27.347611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.025 [2024-04-17 10:29:27.350406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.286 [2024-04-17 10:29:27.359108] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.286 [2024-04-17 10:29:27.359611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.359801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.359816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.286 [2024-04-17 10:29:27.359827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.286 [2024-04-17 10:29:27.360046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.286 [2024-04-17 10:29:27.360244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.286 [2024-04-17 10:29:27.360257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.286 [2024-04-17 10:29:27.360267] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.286 [2024-04-17 10:29:27.362940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.286 [2024-04-17 10:29:27.371825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.286 [2024-04-17 10:29:27.372338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.372622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.372669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.286 [2024-04-17 10:29:27.372705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.286 [2024-04-17 10:29:27.373183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.286 [2024-04-17 10:29:27.373461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.286 [2024-04-17 10:29:27.373474] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.286 [2024-04-17 10:29:27.373484] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.286 [2024-04-17 10:29:27.376227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.286 [2024-04-17 10:29:27.384856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.286 [2024-04-17 10:29:27.385391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.385571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.385601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.286 [2024-04-17 10:29:27.385622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.286 [2024-04-17 10:29:27.386070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.286 [2024-04-17 10:29:27.386510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.286 [2024-04-17 10:29:27.386523] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.286 [2024-04-17 10:29:27.386533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.286 [2024-04-17 10:29:27.389276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.286 [2024-04-17 10:29:27.397776] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.286 [2024-04-17 10:29:27.398287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.398584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.286 [2024-04-17 10:29:27.398615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.286 [2024-04-17 10:29:27.398637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.286 [2024-04-17 10:29:27.398981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.286 [2024-04-17 10:29:27.399254] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.286 [2024-04-17 10:29:27.399266] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.286 [2024-04-17 10:29:27.399276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.286 [2024-04-17 10:29:27.402336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.286 [2024-04-17 10:29:27.410759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.286 [2024-04-17 10:29:27.411221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.411549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.411580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.411602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.411905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.412290] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.412315] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.412336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.415479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.287 [2024-04-17 10:29:27.423543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.424020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.424332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.424363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.424385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.424658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.424812] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.424825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.424835] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.427754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.287 [2024-04-17 10:29:27.436402] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.436933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.437219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.437251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.437272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.437496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.437655] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.437668] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.437678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.441622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.287 [2024-04-17 10:29:27.449969] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.450404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.450732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.450765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.450787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.451265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.451640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.451661] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.451671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.454253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.287 [2024-04-17 10:29:27.463087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.463619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.463922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.463955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.463977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.464356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.464719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.464733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.464743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.467550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.287 [2024-04-17 10:29:27.475967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.476395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.476597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.476613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.476623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.476738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.476961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.476973] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.476983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.479821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.287 [2024-04-17 10:29:27.488991] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.489430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.489669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.489702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.489724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.489953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.287 [2024-04-17 10:29:27.490152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.287 [2024-04-17 10:29:27.490169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.287 [2024-04-17 10:29:27.490179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.287 [2024-04-17 10:29:27.493084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.287 [2024-04-17 10:29:27.501817] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.287 [2024-04-17 10:29:27.502241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.502465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.287 [2024-04-17 10:29:27.502481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.287 [2024-04-17 10:29:27.502491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.287 [2024-04-17 10:29:27.502718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.502895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.502908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.502918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.505732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.288 [2024-04-17 10:29:27.514841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.515370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.515622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.515670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.515694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.515964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.516117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.516130] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.516140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.518752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.288 [2024-04-17 10:29:27.527908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.528298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.528494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.528510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.528521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.528725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.528925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.528937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.528951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.531587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.288 [2024-04-17 10:29:27.540748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.541241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.541597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.541628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.541662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.541911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.542132] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.542145] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.542154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.544814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.288 [2024-04-17 10:29:27.553559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.553990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.554306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.554337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.554359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.554702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.554972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.554984] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.554994] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.557740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.288 [2024-04-17 10:29:27.566696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.567158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.567488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.567519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.567541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.567855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.568054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.568067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.568077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.570711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.288 [2024-04-17 10:29:27.579706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.580228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.580511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.580542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.580564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.580950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.581150] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.581162] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.581172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.583869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.288 [2024-04-17 10:29:27.592892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.593370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.593598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.593613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.593623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.593760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.593913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.593925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.593934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.596744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.288 [2024-04-17 10:29:27.605759] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.288 [2024-04-17 10:29:27.606245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.606503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.288 [2024-04-17 10:29:27.606533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.288 [2024-04-17 10:29:27.606556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.288 [2024-04-17 10:29:27.606892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.288 [2024-04-17 10:29:27.606978] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.288 [2024-04-17 10:29:27.606991] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.288 [2024-04-17 10:29:27.607001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.288 [2024-04-17 10:29:27.609494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.548 [2024-04-17 10:29:27.618591] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.548 [2024-04-17 10:29:27.619145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.548 [2024-04-17 10:29:27.619458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.548 [2024-04-17 10:29:27.619490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.548 [2024-04-17 10:29:27.619513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.548 [2024-04-17 10:29:27.620055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.548 [2024-04-17 10:29:27.620277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.620290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.620299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.623020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.631652] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.632143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.632454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.632485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.632508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.632898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.633131] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.633144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.633153] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.635941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.644800] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.645173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.645484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.645515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.645537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.645880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.646167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.646180] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.646189] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.649044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.657620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.658062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.658398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.658429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.658450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.658842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.659044] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.659055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.659065] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.661648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.670619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.671137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.671409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.671441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.671463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.672004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.672158] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.672171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.672181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.674928] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.683606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.684112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.684410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.684441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.684463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.684756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.684967] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.684979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.684989] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.687667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.696543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.696976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.697170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.697190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.697200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.697398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.697550] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.697563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.697573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.700500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.709519] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.710056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.710373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.710405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.710426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.710820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.711014] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.711026] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.711036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.713754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.722859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.723236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.723417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.723433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.723443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.723549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.723777] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.723790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.723799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.726559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.735559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.735959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.736207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.736223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.736237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.736433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.736585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.736598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.736607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.739465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.748560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.749068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.749396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.749428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.749450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.749742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.750027] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.750051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.750072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.752828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.761462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.761952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.762282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.762313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.762339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.762512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.762719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.762733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.762742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.765257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.774247] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.774734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.774963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.774994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.775017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.775393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.775545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.775558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.775568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.778227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.787461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.787935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.788266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.788297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.788319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.788573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.788735] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.788748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.788758] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.791449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.800534] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.801095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.801437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.801468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.801490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.801824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.802080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.802097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.802111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.806337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.813717] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.814213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.814415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.814447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.814469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.815011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.815401] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.815426] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.815447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.818253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.826631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.827132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.827461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.827491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.827513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.827785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.827961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.827974] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.827983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.830908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.549 [2024-04-17 10:29:27.839490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.839950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.840236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.549 [2024-04-17 10:29:27.840268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.549 [2024-04-17 10:29:27.840290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.549 [2024-04-17 10:29:27.840768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.549 [2024-04-17 10:29:27.840969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.549 [2024-04-17 10:29:27.840981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.549 [2024-04-17 10:29:27.840991] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.549 [2024-04-17 10:29:27.843696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.549 [2024-04-17 10:29:27.852453] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.549 [2024-04-17 10:29:27.852984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.853268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.853300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.550 [2024-04-17 10:29:27.853322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.550 [2024-04-17 10:29:27.853716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.550 [2024-04-17 10:29:27.854101] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.550 [2024-04-17 10:29:27.854134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.550 [2024-04-17 10:29:27.854155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.550 [2024-04-17 10:29:27.856935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.550 [2024-04-17 10:29:27.865721] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.550 [2024-04-17 10:29:27.866198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.866437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.866466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.550 [2024-04-17 10:29:27.866488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.550 [2024-04-17 10:29:27.866832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.550 [2024-04-17 10:29:27.867217] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.550 [2024-04-17 10:29:27.867241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.550 [2024-04-17 10:29:27.867262] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.550 [2024-04-17 10:29:27.870179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.550 [2024-04-17 10:29:27.878655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.550 [2024-04-17 10:29:27.879001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.879180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.550 [2024-04-17 10:29:27.879195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.550 [2024-04-17 10:29:27.879207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.550 [2024-04-17 10:29:27.879404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.810 [2024-04-17 10:29:27.879579] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.810 [2024-04-17 10:29:27.879592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.810 [2024-04-17 10:29:27.879601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.810 [2024-04-17 10:29:27.882459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.810 [2024-04-17 10:29:27.891312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.810 [2024-04-17 10:29:27.891817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.892141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.892172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.810 [2024-04-17 10:29:27.892195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.810 [2024-04-17 10:29:27.892498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.810 [2024-04-17 10:29:27.892679] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.810 [2024-04-17 10:29:27.892693] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.810 [2024-04-17 10:29:27.892709] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.810 [2024-04-17 10:29:27.896367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.810 [2024-04-17 10:29:27.904765] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.810 [2024-04-17 10:29:27.905246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.905478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.905510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.810 [2024-04-17 10:29:27.905532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.810 [2024-04-17 10:29:27.905827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.810 [2024-04-17 10:29:27.906003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.810 [2024-04-17 10:29:27.906016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.810 [2024-04-17 10:29:27.906025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.810 [2024-04-17 10:29:27.908632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.810 [2024-04-17 10:29:27.917923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.810 [2024-04-17 10:29:27.918369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.918625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.810 [2024-04-17 10:29:27.918672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.810 [2024-04-17 10:29:27.918699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.810 [2024-04-17 10:29:27.918927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.810 [2024-04-17 10:29:27.919058] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.919070] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.919080] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.921852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.811 [2024-04-17 10:29:27.930890] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.931375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.931569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.931585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.931596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.931777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.931975] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.931988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.931997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.934736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.811 [2024-04-17 10:29:27.943862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.944380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.944564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.944596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.944619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.944932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.945108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.945121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.945131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.947859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.811 [2024-04-17 10:29:27.956989] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.957415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.957674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.957691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.957701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.957875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.958050] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.958063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.958072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.960826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.811 [2024-04-17 10:29:27.969979] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.970441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.970678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.970711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.970734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.971211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.971654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.971681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.971701] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.974733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.811 [2024-04-17 10:29:27.983074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.983508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.983809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.983843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.983865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.984084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.984305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.984318] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.984328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.987275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.811 [2024-04-17 10:29:27.995662] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:27.996067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.996223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:27.996238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:27.996249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:27.996467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:27.996651] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:27.996665] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:27.996675] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:27.999373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
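The cycle above keeps repeating while nothing is listening on 10.0.0.2:4420: errno 111 is ECONNREFUSED on Linux, so each reconnect attempt fails inside connect(), the qpair never leaves its error state, and bdev_nvme reports "Resetting controller failed" before scheduling the next reset. The standalone sketch below is not SPDK code; it only reproduces that errno in isolation so the repeated "connect() failed, errno = 111" lines are easier to read (the address, port, and retry count mirror the log but are purely illustrative).

```c
/* Standalone illustration (not SPDK source): a TCP connect() to an address
 * with no listener fails immediately with ECONNREFUSED (111 on Linux),
 * which is exactly the errno printed on every reconnect attempt above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 3; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* With no NVMe-oF target listening, this prints errno 111 (ECONNREFUSED). */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1); /* crude retry pacing; SPDK's bdev_nvme layer paces its own resets */
    }
    return 1;
}
```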
00:32:54.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3649079 Killed "${NVMF_APP[@]}" "$@" 00:32:54.811 10:29:28 -- host/bdevperf.sh@36 -- # tgt_init 00:32:54.811 10:29:28 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:54.811 10:29:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:54.811 10:29:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:54.811 10:29:28 -- common/autotest_common.sh@10 -- # set +x 00:32:54.811 [2024-04-17 10:29:28.008551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:28.008863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:28.009031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:28.009047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:28.009057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:28.009276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.811 [2024-04-17 10:29:28.009406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.811 [2024-04-17 10:29:28.009419] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.811 [2024-04-17 10:29:28.009432] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.811 [2024-04-17 10:29:28.012046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.811 10:29:28 -- nvmf/common.sh@469 -- # nvmfpid=3650548 00:32:54.811 10:29:28 -- nvmf/common.sh@470 -- # waitforlisten 3650548 00:32:54.811 10:29:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:54.811 10:29:28 -- common/autotest_common.sh@819 -- # '[' -z 3650548 ']' 00:32:54.811 10:29:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.811 10:29:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:54.811 10:29:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:54.811 10:29:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:54.811 10:29:28 -- common/autotest_common.sh@10 -- # set +x 00:32:54.811 [2024-04-17 10:29:28.021336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.811 [2024-04-17 10:29:28.021827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:28.021979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.811 [2024-04-17 10:29:28.021996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.811 [2024-04-17 10:29:28.022007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.811 [2024-04-17 10:29:28.022182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.022357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.022370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.022379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.025260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.812 [2024-04-17 10:29:28.034253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.034641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.034776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.034793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.034803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.035023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.035198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.035210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.035220] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.037947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.812 [2024-04-17 10:29:28.047315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.047730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.047956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.047976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.047987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.048185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.048361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.048373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.048383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.051044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.812 [2024-04-17 10:29:28.057623] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:32:54.812 [2024-04-17 10:29:28.057682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.812 [2024-04-17 10:29:28.060322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.060744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.060950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.060966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.060978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.061129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.061305] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.061318] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.061328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.063830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.812 [2024-04-17 10:29:28.073278] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.073701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.073851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.073867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.073877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.074052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.074249] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.074262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.074271] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.077018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.812 [2024-04-17 10:29:28.086294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.086712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.086859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.086875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.086885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.087104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.087303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.087315] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.087325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.089985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.812 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.812 [2024-04-17 10:29:28.099466] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.099975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.100199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.100215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.100226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.100356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.100598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.100610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.100620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.103439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.812 [2024-04-17 10:29:28.112716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.113197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.113466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.113482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.113493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.113677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.113852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.113864] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.113875] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.116711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.812 [2024-04-17 10:29:28.125859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.126402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.126596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.126612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.126624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.812 [2024-04-17 10:29:28.126784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.812 [2024-04-17 10:29:28.126982] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.812 [2024-04-17 10:29:28.126994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.812 [2024-04-17 10:29:28.127004] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:54.812 [2024-04-17 10:29:28.129734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.812 [2024-04-17 10:29:28.138124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:54.812 [2024-04-17 10:29:28.139065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:54.812 [2024-04-17 10:29:28.139508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.139761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.812 [2024-04-17 10:29:28.139778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:54.812 [2024-04-17 10:29:28.139789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:54.813 [2024-04-17 10:29:28.139964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:54.813 [2024-04-17 10:29:28.140118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:54.813 [2024-04-17 10:29:28.140131] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:54.813 [2024-04-17 10:29:28.140141] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.142760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.146 [2024-04-17 10:29:28.152158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.146 [2024-04-17 10:29:28.152583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.152786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.152803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.146 [2024-04-17 10:29:28.152813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.146 [2024-04-17 10:29:28.152965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.146 [2024-04-17 10:29:28.153140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.146 [2024-04-17 10:29:28.153153] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.146 [2024-04-17 10:29:28.153163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.155797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.146 [2024-04-17 10:29:28.165177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.146 [2024-04-17 10:29:28.165624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.165887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.165909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.146 [2024-04-17 10:29:28.165920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.146 [2024-04-17 10:29:28.166095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.146 [2024-04-17 10:29:28.166272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.146 [2024-04-17 10:29:28.166285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.146 [2024-04-17 10:29:28.166295] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.168953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.146 [2024-04-17 10:29:28.178054] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.146 [2024-04-17 10:29:28.178557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.178786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.178802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.146 [2024-04-17 10:29:28.178813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.146 [2024-04-17 10:29:28.179010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.146 [2024-04-17 10:29:28.179231] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.146 [2024-04-17 10:29:28.179244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.146 [2024-04-17 10:29:28.179255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.181980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.146 [2024-04-17 10:29:28.191038] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.146 [2024-04-17 10:29:28.191599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.191830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.191848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.146 [2024-04-17 10:29:28.191858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.146 [2024-04-17 10:29:28.192033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.146 [2024-04-17 10:29:28.192232] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.146 [2024-04-17 10:29:28.192244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.146 [2024-04-17 10:29:28.192255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.194801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.146 [2024-04-17 10:29:28.204261] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.146 [2024-04-17 10:29:28.204649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.204799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.146 [2024-04-17 10:29:28.204815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.146 [2024-04-17 10:29:28.204831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.146 [2024-04-17 10:29:28.205004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.146 [2024-04-17 10:29:28.205157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.146 [2024-04-17 10:29:28.205170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.146 [2024-04-17 10:29:28.205179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.146 [2024-04-17 10:29:28.207904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.147 [2024-04-17 10:29:28.217238] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.217723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.217906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.217922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.217933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.218176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.218398] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.218411] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.218420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.221032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.147 [2024-04-17 10:29:28.228607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:55.147 [2024-04-17 10:29:28.228746] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.147 [2024-04-17 10:29:28.228759] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.147 [2024-04-17 10:29:28.228769] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:55.147 [2024-04-17 10:29:28.228816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.147 [2024-04-17 10:29:28.228931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.147 [2024-04-17 10:29:28.228932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.147 [2024-04-17 10:29:28.230340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.230900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.231104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.231121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.231132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.231262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.231459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.231472] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.231482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.234074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.147 [2024-04-17 10:29:28.243541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.243938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.244121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.244138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.244148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.244324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.244544] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.244558] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.244568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.247317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.147 [2024-04-17 10:29:28.256529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.256881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.257078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.257094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.257105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.257281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.257456] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.257469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.257479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.260094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.147 [2024-04-17 10:29:28.269733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.270144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.270394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.270410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.270420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.270618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.270801] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.270815] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.270824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.273748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.147 [2024-04-17 10:29:28.282381] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.282846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.283063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.283079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.283089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.283242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.283395] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.283409] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.283419] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.286052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.147 [2024-04-17 10:29:28.295259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.295722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.295922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.147 [2024-04-17 10:29:28.295938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.147 [2024-04-17 10:29:28.295949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.147 [2024-04-17 10:29:28.296124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.147 [2024-04-17 10:29:28.296277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.147 [2024-04-17 10:29:28.296290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.147 [2024-04-17 10:29:28.296300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.147 [2024-04-17 10:29:28.299072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.147 [2024-04-17 10:29:28.308003] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.147 [2024-04-17 10:29:28.308553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.308808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.308824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.308835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.308987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.309117] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.309130] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.309140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.311660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.148 [2024-04-17 10:29:28.321100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.321512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.321771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.321788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.321800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.321952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.322083] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.322095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.322107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.324673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.148 [2024-04-17 10:29:28.334039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.334534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.334809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.334828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.334840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.335084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.335328] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.335341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.335350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.338256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.148 [2024-04-17 10:29:28.347273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.347738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.347890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.347907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.347918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.348117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.348361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.348374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.348383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.350926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.148 [2024-04-17 10:29:28.359999] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.360455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.360664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.360682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.360692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.360867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.360998] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.361010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.361020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.363865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.148 [2024-04-17 10:29:28.373090] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.373516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.373723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.373741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.373752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.373906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.374105] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.374118] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.374128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.376830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.148 [2024-04-17 10:29:28.385966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.386442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.386656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.386674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.386685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.386860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.387014] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.387027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.387037] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.389596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.148 [2024-04-17 10:29:28.398799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.399258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.399454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.399470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.399485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.148 [2024-04-17 10:29:28.399687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.148 [2024-04-17 10:29:28.399909] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.148 [2024-04-17 10:29:28.399922] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.148 [2024-04-17 10:29:28.399932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.148 [2024-04-17 10:29:28.402603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.148 [2024-04-17 10:29:28.411785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.148 [2024-04-17 10:29:28.412223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.412496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.148 [2024-04-17 10:29:28.412511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.148 [2024-04-17 10:29:28.412524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.149 [2024-04-17 10:29:28.412726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.149 [2024-04-17 10:29:28.412924] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-04-17 10:29:28.412937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-04-17 10:29:28.412947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-04-17 10:29:28.415915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.149 [2024-04-17 10:29:28.424371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-04-17 10:29:28.424837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.425108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.425133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-04-17 10:29:28.425149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.149 [2024-04-17 10:29:28.425373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.149 [2024-04-17 10:29:28.425641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-04-17 10:29:28.425668] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-04-17 10:29:28.425683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-04-17 10:29:28.428412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-04-17 10:29:28.437658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-04-17 10:29:28.438135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.438400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.438420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-04-17 10:29:28.438432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.149 [2024-04-17 10:29:28.438593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.149 [2024-04-17 10:29:28.438803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-04-17 10:29:28.438816] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-04-17 10:29:28.438826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-04-17 10:29:28.441819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.149 [2024-04-17 10:29:28.450676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-04-17 10:29:28.451201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.451450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.451466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-04-17 10:29:28.451478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.149 [2024-04-17 10:29:28.451630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.149 [2024-04-17 10:29:28.451790] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-04-17 10:29:28.451804] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-04-17 10:29:28.451813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-04-17 10:29:28.454441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.149 [2024-04-17 10:29:28.463580] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.149 [2024-04-17 10:29:28.464065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.464344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.149 [2024-04-17 10:29:28.464360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.149 [2024-04-17 10:29:28.464371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.149 [2024-04-17 10:29:28.464546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.149 [2024-04-17 10:29:28.464706] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.149 [2024-04-17 10:29:28.464720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.149 [2024-04-17 10:29:28.464731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.149 [2024-04-17 10:29:28.467604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.149 [2024-04-17 10:29:28.476507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.477013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.477209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.477224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.477235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.477437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.477635] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.477654] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.477665] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.480405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.418 [2024-04-17 10:29:28.489332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.489810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.489992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.490008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.490019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.490217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.490436] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.490449] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.490459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.493245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.418 [2024-04-17 10:29:28.502234] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.502626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.502824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.502840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.502851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.503002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.503108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.503121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.503131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.505738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.418 [2024-04-17 10:29:28.515345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.515779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.516003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.516019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.516030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.516228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.516339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.516351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.516361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.518925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.418 [2024-04-17 10:29:28.528287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.528795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.528997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.529013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.529023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.529175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.529373] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.529386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.529396] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.532119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.418 [2024-04-17 10:29:28.541339] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.541754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.542030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.542046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.542058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.542210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.542407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.542420] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.542430] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.545073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.418 [2024-04-17 10:29:28.554585] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.554978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.555228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.418 [2024-04-17 10:29:28.555243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.418 [2024-04-17 10:29:28.555254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.418 [2024-04-17 10:29:28.555406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.418 [2024-04-17 10:29:28.555558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.418 [2024-04-17 10:29:28.555571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.418 [2024-04-17 10:29:28.555588] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.418 [2024-04-17 10:29:28.558356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.418 [2024-04-17 10:29:28.567561] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.418 [2024-04-17 10:29:28.567907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.568110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.568126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.568137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.568290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.568487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.568500] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.568510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.571006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.580456] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.580830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.581110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.581125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.581136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.581378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.581485] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.581497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.581507] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.584250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.593586] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.594071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.594347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.594363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.594374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.594548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.594686] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.594699] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.594712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.597629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.606465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.606860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.607115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.607132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.607142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.607316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.607423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.607435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.607445] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.610004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.619380] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.619827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.620048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.620064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.620075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.620272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.620447] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.620460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.620470] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.623218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.632278] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.632620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.632904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.632921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.632932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.633152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.633327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.633339] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.633349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.635845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.645327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.645868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.646065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.646080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.646091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.646221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.646394] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.646406] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.646416] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.649166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.658359] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.658820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.659101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.659117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.659129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.659303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.659524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.659536] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.659546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.662309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.671366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.671841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.672129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.672144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.672155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.672329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.672481] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.672494] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.672503] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.675291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.684326] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.684788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.685062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.685078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.685089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.685263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.685415] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.685428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.685437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.687982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.697293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.697701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.697902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.697918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.697929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.698125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.698278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.698290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.698300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.701068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.710350] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.710828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.711102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.711117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.711127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.711256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.711431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.711444] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.711454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.714059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.419 [2024-04-17 10:29:28.723507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.723970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.724226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.724242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.724252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.724381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.724654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.724667] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.724677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.419 [2024-04-17 10:29:28.727391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.419 [2024-04-17 10:29:28.736520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.419 [2024-04-17 10:29:28.737025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.737223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.419 [2024-04-17 10:29:28.737238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.419 [2024-04-17 10:29:28.737249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.419 [2024-04-17 10:29:28.737448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.419 [2024-04-17 10:29:28.737672] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.419 [2024-04-17 10:29:28.737685] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.419 [2024-04-17 10:29:28.737695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.420 [2024-04-17 10:29:28.740566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.681 [2024-04-17 10:29:28.749400] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.681 [2024-04-17 10:29:28.749925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.750128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.750143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.681 [2024-04-17 10:29:28.750154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.681 [2024-04-17 10:29:28.750283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.681 [2024-04-17 10:29:28.750502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.681 [2024-04-17 10:29:28.750514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.681 [2024-04-17 10:29:28.750524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.681 [2024-04-17 10:29:28.753111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.681 [2024-04-17 10:29:28.762448] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.681 [2024-04-17 10:29:28.762888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.763035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.763051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.681 [2024-04-17 10:29:28.763066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.681 [2024-04-17 10:29:28.763240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.681 [2024-04-17 10:29:28.763392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.681 [2024-04-17 10:29:28.763405] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.681 [2024-04-17 10:29:28.763414] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.681 [2024-04-17 10:29:28.766135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.681 [2024-04-17 10:29:28.775379] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.681 [2024-04-17 10:29:28.775818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.776096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.681 [2024-04-17 10:29:28.776112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.681 [2024-04-17 10:29:28.776122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.681 [2024-04-17 10:29:28.776341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.776471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.776484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.776494] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.779151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.682 [2024-04-17 10:29:28.788462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.788887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.789088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.789105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.789115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.789312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.789486] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.789498] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.789508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.792273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.682 [2024-04-17 10:29:28.801460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.801940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.802196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.802212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.802226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.802445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.802598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.802610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.802619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.805362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.682 [2024-04-17 10:29:28.814351] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.814754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.814958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.814974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.814984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.815135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.815333] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.815346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.815356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.818574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.682 [2024-04-17 10:29:28.827296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.827735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.827886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.827902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.827912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.828110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.828329] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.828342] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.828351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.831116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.682 [2024-04-17 10:29:28.840272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.840798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.841011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.841027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.841037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.841171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.841367] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.841380] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.841391] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.844030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.682 [2024-04-17 10:29:28.853097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.853598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.853782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.853798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.853809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.854006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.854181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.854194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.854203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.856947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.682 [2024-04-17 10:29:28.866117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.866565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.866846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.866862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.866873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.867092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.867223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.867235] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.867244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.870011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.682 [2024-04-17 10:29:28.879110] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.879579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.879763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.879780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.879791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.879921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.880030] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.880043] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.880053] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.882684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.682 [2024-04-17 10:29:28.892061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.892484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.892758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.892774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.892785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.892959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.682 [2024-04-17 10:29:28.893111] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.682 [2024-04-17 10:29:28.893124] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.682 [2024-04-17 10:29:28.893134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.682 [2024-04-17 10:29:28.895699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.682 [2024-04-17 10:29:28.904873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.682 [2024-04-17 10:29:28.905156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.905430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.682 [2024-04-17 10:29:28.905446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.682 [2024-04-17 10:29:28.905456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.682 [2024-04-17 10:29:28.905608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.905810] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.905823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.905834] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.908639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.683 [2024-04-17 10:29:28.917902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.918383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.918584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.918600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.918611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.918723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.918921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.918937] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.918947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.921418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.683 [2024-04-17 10:29:28.931052] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.931409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.931691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.931708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.931719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.931872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.932003] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.932015] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.932025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.934720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.683 [2024-04-17 10:29:28.944222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.944642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.944924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.944941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.944951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.945125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.945277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.945290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.945300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.947977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.683 [2024-04-17 10:29:28.956987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.957352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.957627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.957649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.957661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.957813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.957988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.958001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.958013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.960639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.683 [2024-04-17 10:29:28.970025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.970452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.970776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.970793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.970804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.970978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.971130] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.971143] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.971153] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.974030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.683 [2024-04-17 10:29:28.982966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.983447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.983609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.983624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.983634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.983859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.984013] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.984025] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.984035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.986663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.683 [2024-04-17 10:29:28.996149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:28.996626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.996828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:28.996844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:28.996856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:28.997052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:28.997182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:28.997195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:28.997205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.683 [2024-04-17 10:29:28.999838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
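The run above is the host-side bdev_nvme reconnect loop: each pass disconnects the controller for nqn.2016-06.io.spdk:cnode1, retries connect() to 10.0.0.2 port 4420, gets errno 111 (ECONNREFUSED) because nothing is listening there yet, and ends with "Resetting controller failed." before the next attempt. As a minimal sketch only (the bdev name and the rpc.py path are assumptions, not taken from this log), a controller that keeps resetting like this would typically have been registered on the host with an SPDK RPC along these lines:

    # Hypothetical host-side attach; address, port and subsystem NQN match the errors above,
    # while the bdev name prefix (Nvme0) and the script path are assumed.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

Once the target actually listens on that address and port, this same reset path is what reconnects the controller instead of failing.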
00:32:55.683 10:29:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:55.683 10:29:28 -- common/autotest_common.sh@852 -- # return 0 00:32:55.683 10:29:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:55.683 10:29:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:55.683 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.683 [2024-04-17 10:29:29.008845] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.683 [2024-04-17 10:29:29.009345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:29.009525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.683 [2024-04-17 10:29:29.009541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.683 [2024-04-17 10:29:29.009554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.683 [2024-04-17 10:29:29.009688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.683 [2024-04-17 10:29:29.009840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.683 [2024-04-17 10:29:29.009853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.683 [2024-04-17 10:29:29.009862] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.942 [2024-04-17 10:29:29.012764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.942 [2024-04-17 10:29:29.021695] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.942 [2024-04-17 10:29:29.022165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.942 [2024-04-17 10:29:29.022419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.942 [2024-04-17 10:29:29.022434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.942 [2024-04-17 10:29:29.022444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.942 [2024-04-17 10:29:29.022596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.942 [2024-04-17 10:29:29.022800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.942 [2024-04-17 10:29:29.022814] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.942 [2024-04-17 10:29:29.022824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.942 [2024-04-17 10:29:29.025320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:55.943 [2024-04-17 10:29:29.034978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 [2024-04-17 10:29:29.035250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.035446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.035462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.035475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.035676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 [2024-04-17 10:29:29.035829] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.943 [2024-04-17 10:29:29.035842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.943 [2024-04-17 10:29:29.035858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.943 10:29:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.943 10:29:29 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.943 10:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.943 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.943 [2024-04-17 10:29:29.038575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.943 [2024-04-17 10:29:29.044276] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.943 [2024-04-17 10:29:29.047817] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 [2024-04-17 10:29:29.048193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.048359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.048375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.048386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.048515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 [2024-04-17 10:29:29.048741] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.943 [2024-04-17 10:29:29.048755] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.943 [2024-04-17 10:29:29.048765] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:55.943 10:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.943 10:29:29 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:55.943 10:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.943 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.943 [2024-04-17 10:29:29.051635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.943 [2024-04-17 10:29:29.061004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 [2024-04-17 10:29:29.061423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.061682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.061699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.061711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.061908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 [2024-04-17 10:29:29.062083] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.943 [2024-04-17 10:29:29.062095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.943 [2024-04-17 10:29:29.062105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.943 [2024-04-17 10:29:29.064869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.943 [2024-04-17 10:29:29.073897] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 [2024-04-17 10:29:29.074324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.074600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.074615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.074626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.074835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 [2024-04-17 10:29:29.075057] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.943 [2024-04-17 10:29:29.075069] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.943 [2024-04-17 10:29:29.075079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.943 [2024-04-17 10:29:29.077870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
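The interleaved rpc_cmd lines around this point show the target side being stood up while the reset loop runs: a TCP transport is created (nvmf_create_transport -t tcp -o -u 8192), a 64 MB malloc bdev is added, and, in the lines that follow, subsystem nqn.2016-06.io.spdk:cnode1 gets that namespace plus a TCP listener on 10.0.0.2:4420. A sketch of the same sequence issued directly with scripts/rpc.py against the default RPC socket (/var/tmp/spdk.sock), assuming it is run from the SPDK repository root; the test itself goes through the rpc_cmd wrapper instead:
# Sketch only: the same RPCs as the rpc_cmd lines in this log, sent to an
# already-running nvmf_tgt over the default /var/tmp/spdk.sock.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the listener RPC completes, the log records "NVMe/TCP Target Listening on 10.0.0.2 port 4420" and the next reset attempt succeeds.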
00:32:55.943 [2024-04-17 10:29:29.086599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 Malloc0 00:32:55.943 [2024-04-17 10:29:29.087085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.087308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.087323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.087333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.087485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 10:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.943 [2024-04-17 10:29:29.087667] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.943 [2024-04-17 10:29:29.087681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.943 [2024-04-17 10:29:29.087691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:55.943 10:29:29 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:55.943 10:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.943 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.943 [2024-04-17 10:29:29.090678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.943 10:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.943 10:29:29 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:55.943 [2024-04-17 10:29:29.099760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.943 10:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.943 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.943 [2024-04-17 10:29:29.100213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.100465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:55.943 [2024-04-17 10:29:29.100481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efee40 with addr=10.0.0.2, port=4420 00:32:55.943 [2024-04-17 10:29:29.100491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1efee40 is same with the state(5) to be set 00:32:55.943 [2024-04-17 10:29:29.100673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efee40 (9): Bad file descriptor 00:32:55.943 [2024-04-17 10:29:29.100870] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:55.944 [2024-04-17 10:29:29.100883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:55.944 [2024-04-17 10:29:29.100893] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:55.944 [2024-04-17 10:29:29.103249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:55.944 10:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.944 10:29:29 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.944 10:29:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:55.944 10:29:29 -- common/autotest_common.sh@10 -- # set +x 00:32:55.944 [2024-04-17 10:29:29.110694] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.944 [2024-04-17 10:29:29.112832] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:55.944 10:29:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:55.944 10:29:29 -- host/bdevperf.sh@38 -- # wait 3649478 00:32:55.944 [2024-04-17 10:29:29.263437] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:05.916 00:33:05.916 Latency(us) 00:33:05.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.917 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:05.917 Verification LBA range: start 0x0 length 0x4000 00:33:05.917 Nvme1n1 : 15.01 8278.31 32.34 13122.46 0.00 5962.66 927.19 20614.05 00:33:05.917 =================================================================================================================== 00:33:05.917 Total : 8278.31 32.34 13122.46 0.00 5962.66 927.19 20614.05 00:33:05.917 10:29:37 -- host/bdevperf.sh@39 -- # sync 00:33:05.917 10:29:37 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:05.917 10:29:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:05.917 10:29:37 -- common/autotest_common.sh@10 -- # set +x 00:33:05.917 10:29:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:05.917 10:29:37 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:05.917 10:29:37 -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:05.917 10:29:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:05.917 10:29:37 -- nvmf/common.sh@116 -- # sync 00:33:05.917 10:29:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:05.917 10:29:37 -- nvmf/common.sh@119 -- # set +e 00:33:05.917 10:29:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:05.917 10:29:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:05.917 rmmod nvme_tcp 00:33:05.917 rmmod nvme_fabrics 00:33:05.917 rmmod nvme_keyring 00:33:05.917 10:29:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:05.917 10:29:37 -- nvmf/common.sh@123 -- # set -e 00:33:05.917 10:29:37 -- nvmf/common.sh@124 -- # return 0 00:33:05.917 10:29:37 -- nvmf/common.sh@477 -- # '[' -n 3650548 ']' 00:33:05.917 10:29:37 -- nvmf/common.sh@478 -- # killprocess 3650548 00:33:05.917 10:29:37 -- common/autotest_common.sh@926 -- # '[' -z 3650548 ']' 00:33:05.917 10:29:37 -- common/autotest_common.sh@930 -- # kill -0 3650548 00:33:05.917 10:29:37 -- common/autotest_common.sh@931 -- # uname 00:33:05.917 10:29:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:05.917 10:29:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3650548 00:33:05.917 10:29:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:05.917 10:29:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:05.917 10:29:37 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 3650548' 00:33:05.917 killing process with pid 3650548 00:33:05.917 10:29:37 -- common/autotest_common.sh@945 -- # kill 3650548 00:33:05.917 10:29:37 -- common/autotest_common.sh@950 -- # wait 3650548 00:33:05.917 10:29:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:05.917 10:29:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:05.917 10:29:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:05.917 10:29:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.917 10:29:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:05.917 10:29:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.917 10:29:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:05.917 10:29:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.855 10:29:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:06.855 00:33:06.855 real 0m26.544s 00:33:06.855 user 1m3.627s 00:33:06.855 sys 0m6.335s 00:33:06.855 10:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:06.855 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:06.855 ************************************ 00:33:06.855 END TEST nvmf_bdevperf 00:33:06.855 ************************************ 00:33:06.855 10:29:40 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:06.855 10:29:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:06.855 10:29:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:06.855 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:06.855 ************************************ 00:33:06.855 START TEST nvmf_target_disconnect 00:33:06.855 ************************************ 00:33:06.855 10:29:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:07.114 * Looking for test storage... 
00:33:07.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.114 10:29:40 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.114 10:29:40 -- nvmf/common.sh@7 -- # uname -s 00:33:07.114 10:29:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.114 10:29:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.114 10:29:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.114 10:29:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.114 10:29:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.114 10:29:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.114 10:29:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.114 10:29:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.114 10:29:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.114 10:29:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.114 10:29:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:07.114 10:29:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:07.114 10:29:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.114 10:29:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.114 10:29:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.114 10:29:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.114 10:29:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.114 10:29:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.114 10:29:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.114 10:29:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.115 10:29:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.115 10:29:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.115 10:29:40 -- paths/export.sh@5 -- # export PATH 00:33:07.115 10:29:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.115 10:29:40 -- nvmf/common.sh@46 -- # : 0 00:33:07.115 10:29:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:07.115 10:29:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:07.115 10:29:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:07.115 10:29:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.115 10:29:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.115 10:29:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:07.115 10:29:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:07.115 10:29:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:07.115 10:29:40 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:07.115 10:29:40 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:07.115 10:29:40 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:07.115 10:29:40 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:07.115 10:29:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:07.115 10:29:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.115 10:29:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:07.115 10:29:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:07.115 10:29:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:07.115 10:29:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.115 10:29:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:07.115 10:29:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.115 10:29:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:07.115 10:29:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:07.115 10:29:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:07.115 10:29:40 -- common/autotest_common.sh@10 -- # set +x 00:33:12.391 10:29:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:12.391 10:29:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:12.391 10:29:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:12.391 10:29:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:12.391 10:29:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:12.391 10:29:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:12.391 10:29:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:12.391 
10:29:45 -- nvmf/common.sh@294 -- # net_devs=() 00:33:12.391 10:29:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:12.391 10:29:45 -- nvmf/common.sh@295 -- # e810=() 00:33:12.391 10:29:45 -- nvmf/common.sh@295 -- # local -ga e810 00:33:12.391 10:29:45 -- nvmf/common.sh@296 -- # x722=() 00:33:12.391 10:29:45 -- nvmf/common.sh@296 -- # local -ga x722 00:33:12.391 10:29:45 -- nvmf/common.sh@297 -- # mlx=() 00:33:12.392 10:29:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:12.392 10:29:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.392 10:29:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:12.392 10:29:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:12.392 10:29:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:12.392 10:29:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:12.392 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:12.392 10:29:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:12.392 10:29:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:12.392 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:12.392 10:29:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:12.392 10:29:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.392 10:29:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.392 10:29:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:12.392 Found net devices under 0000:af:00.0: cvl_0_0 00:33:12.392 10:29:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.392 10:29:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:12.392 10:29:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.392 10:29:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.392 10:29:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:12.392 Found net devices under 0000:af:00.1: cvl_0_1 00:33:12.392 10:29:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.392 10:29:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:12.392 10:29:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:12.392 10:29:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:12.392 10:29:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.392 10:29:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.392 10:29:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.392 10:29:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:12.392 10:29:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.392 10:29:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.392 10:29:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:12.392 10:29:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.392 10:29:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.392 10:29:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:12.392 10:29:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:12.392 10:29:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.392 10:29:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.651 10:29:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.651 10:29:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.651 10:29:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:12.651 10:29:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.651 10:29:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.651 10:29:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.651 10:29:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:12.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:33:12.651 00:33:12.651 --- 10.0.0.2 ping statistics --- 00:33:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.651 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:33:12.651 10:29:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:12.651 00:33:12.651 --- 10.0.0.1 ping statistics --- 00:33:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.651 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:12.651 10:29:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.651 10:29:45 -- nvmf/common.sh@410 -- # return 0 00:33:12.651 10:29:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:12.651 10:29:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.651 10:29:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:12.651 10:29:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:12.652 10:29:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.652 10:29:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:12.652 10:29:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:12.652 10:29:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:12.652 10:29:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:12.652 10:29:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:12.652 10:29:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.652 ************************************ 00:33:12.652 START TEST nvmf_target_disconnect_tc1 00:33:12.652 ************************************ 00:33:12.652 10:29:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:12.652 10:29:45 -- host/target_disconnect.sh@32 -- # set +e 00:33:12.652 10:29:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:12.652 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.911 [2024-04-17 10:29:46.032890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.911 [2024-04-17 10:29:46.033141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.911 [2024-04-17 10:29:46.033178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcca60 with addr=10.0.0.2, port=4420 00:33:12.911 [2024-04-17 10:29:46.033222] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:12.911 [2024-04-17 10:29:46.033247] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:12.911 [2024-04-17 10:29:46.033266] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:12.911 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:12.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:12.911 Initializing NVMe Controllers 00:33:12.911 10:29:46 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:12.911 10:29:46 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:12.911 10:29:46 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:12.911 10:29:46 -- common/autotest_common.sh@1132 -- # return 0 00:33:12.911 10:29:46 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:12.911 10:29:46 -- host/target_disconnect.sh@41 -- # set -e 00:33:12.911 00:33:12.911 real 0m0.118s 00:33:12.911 user 0m0.049s 00:33:12.911 sys 0m0.068s 00:33:12.911 10:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.911 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:12.911 ************************************ 00:33:12.911 
END TEST nvmf_target_disconnect_tc1 00:33:12.911 ************************************ 00:33:12.911 10:29:46 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:12.911 10:29:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:12.911 10:29:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:12.911 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:12.911 ************************************ 00:33:12.911 START TEST nvmf_target_disconnect_tc2 00:33:12.911 ************************************ 00:33:12.911 10:29:46 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:12.911 10:29:46 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:12.911 10:29:46 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:12.911 10:29:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:12.911 10:29:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:12.911 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:12.911 10:29:46 -- nvmf/common.sh@469 -- # nvmfpid=3655859 00:33:12.911 10:29:46 -- nvmf/common.sh@470 -- # waitforlisten 3655859 00:33:12.911 10:29:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:12.911 10:29:46 -- common/autotest_common.sh@819 -- # '[' -z 3655859 ']' 00:33:12.911 10:29:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.911 10:29:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:12.911 10:29:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.911 10:29:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:12.911 10:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:12.911 [2024-04-17 10:29:46.140153] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:12.911 [2024-04-17 10:29:46.140206] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.911 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.911 [2024-04-17 10:29:46.216363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:13.170 [2024-04-17 10:29:46.306029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:13.170 [2024-04-17 10:29:46.306170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.170 [2024-04-17 10:29:46.306182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.170 [2024-04-17 10:29:46.306191] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
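The nvmf_tcp_init block earlier in this log wires the two cvl_0_* ports of the detected E810 NIC back to back through a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, port 4420 is opened in iptables, and both directions are verified with ping. A condensed sketch of that bring-up, assuming the same interface names; the real common.sh derives them from the PCI scan shown above:
# Sketch of the netns bring-up visible in the preceding log lines (interface names assumed).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address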
00:33:13.170 [2024-04-17 10:29:46.306310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:13.170 [2024-04-17 10:29:46.307664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:13.170 [2024-04-17 10:29:46.307756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:13.170 [2024-04-17 10:29:46.307757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:13.736 10:29:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:13.736 10:29:47 -- common/autotest_common.sh@852 -- # return 0 00:33:13.736 10:29:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:13.736 10:29:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:13.736 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.736 10:29:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.736 10:29:47 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:13.736 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.736 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 Malloc0 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:13.994 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.994 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 [2024-04-17 10:29:47.078280] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:13.994 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.994 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:13.994 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.994 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.994 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.994 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 [2024-04-17 10:29:47.106555] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:13.994 10:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.994 10:29:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.994 10:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.994 10:29:47 -- host/target_disconnect.sh@50 -- # reconnectpid=3656141 00:33:13.994 10:29:47 -- host/target_disconnect.sh@52 -- # sleep 2 00:33:13.994 10:29:47 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:13.994 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.900 10:29:49 -- host/target_disconnect.sh@53 -- # kill -9 3655859 00:33:15.900 10:29:49 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 [2024-04-17 10:29:49.135053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed 
with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 [2024-04-17 10:29:49.135363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, 
sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Write completed with error (sct=0, sc=8) 00:33:15.900 starting I/O failed 00:33:15.900 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 [2024-04-17 10:29:49.135547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 
00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Read completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 Write completed with error (sct=0, sc=8) 00:33:15.901 starting I/O failed 00:33:15.901 [2024-04-17 10:29:49.135838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:15.901 [2024-04-17 10:29:49.136126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.136424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.136458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.136694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.136915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.136945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.137111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.137359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.137374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.137532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.137743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.137774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 
00:33:15.901 [2024-04-17 10:29:49.138031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.138234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.138264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.138544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.138800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.138840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.139081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.139325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.139356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.139574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.139806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.139833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2078000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.139991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.140167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.140178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.140496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.140663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.140695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.141036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.141186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.141215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 
00:33:15.901 [2024-04-17 10:29:49.141524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.141743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.141753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.141936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.142208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.142238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.142534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.142872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.142903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.143055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.143225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.143254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.143459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.143683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.143716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.143888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.144157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.144188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 00:33:15.901 [2024-04-17 10:29:49.144426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.144726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.144737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.901 qpair failed and we were unable to recover it. 
00:33:15.901 [2024-04-17 10:29:49.145024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.145295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.901 [2024-04-17 10:29:49.145325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.145497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.145798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.145829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.146119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.146462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.146491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.146817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.147089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.147119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.147474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.147771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.147803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.148134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.148344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.148374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.148686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.148937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.148967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 
00:33:15.902 [2024-04-17 10:29:49.149199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.149420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.149451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.149701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.149955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.149965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.150194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.150443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.150453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.150618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.150742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.150753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.150996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.151177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.151187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.151402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.151602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.151612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.151905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 
00:33:15.902 [2024-04-17 10:29:49.152197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.152577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.152747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.152917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.153340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.153731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.153850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.154011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.154206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.154216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.154476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.154672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.154702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 
00:33:15.902 [2024-04-17 10:29:49.155005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.155251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.155281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.155582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.155879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.155910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.156142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.156365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.156395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.156614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.156923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.156954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.157183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.157387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.157416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.157656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.157875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.157906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 00:33:15.902 [2024-04-17 10:29:49.158183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.158495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.902 [2024-04-17 10:29:49.158504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.902 qpair failed and we were unable to recover it. 
00:33:15.902 [2024-04-17 10:29:49.158665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.158793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.158803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.159033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.159325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.159355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.159578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.159825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.159860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.160055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.160337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.160348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.160554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.160851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.160882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.161132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.161346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.161376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.161663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.161936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.161946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 
00:33:15.903 [2024-04-17 10:29:49.162233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.162534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.162563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.162897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.163127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.163157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.163366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.163602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.163632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.163854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.164097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.164127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.164344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.164680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.164712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.164923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.165220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.165250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.165560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.165880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.165891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 
00:33:15.903 [2024-04-17 10:29:49.166109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.166364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.166374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.166581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.166742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.166753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.167037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.167359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.167390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.167669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.167884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.167915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.168210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.168492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.168522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.168679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.168905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.168936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.169243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.169463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.169473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 
00:33:15.903 [2024-04-17 10:29:49.169752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.169970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.170000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.170309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.170540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.170571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.170901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.171196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.171227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.171412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.171630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.171640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.171807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.171988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.172000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.172301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.172503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.903 [2024-04-17 10:29:49.172533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.903 qpair failed and we were unable to recover it. 00:33:15.903 [2024-04-17 10:29:49.172754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.172971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.173001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 
00:33:15.904 [2024-04-17 10:29:49.173306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.173554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.173564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.173815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.173945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.173975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.174183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.174454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.174484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.174786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.174923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.174952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.175175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.175475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.175505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.175730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.176030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.176060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.176270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.176515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.176545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 
00:33:15.904 [2024-04-17 10:29:49.176876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.177050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.177060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.177318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.177582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.177611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.177924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.178209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.178240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.178489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.178698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.178708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.178983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.179265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.179296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.179551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.179702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.179733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.180047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.180252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.180282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 
00:33:15.904 [2024-04-17 10:29:49.180562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.180832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.180869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.181168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.181497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.181527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.181764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.182069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.182099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.182406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.182565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.182595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.182906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.183076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.183106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.183356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.183627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.183667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.183979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.184267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.184298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 
00:33:15.904 [2024-04-17 10:29:49.184631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.184979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.185011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.185285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.185549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.904 [2024-04-17 10:29:49.185579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.904 qpair failed and we were unable to recover it. 00:33:15.904 [2024-04-17 10:29:49.185750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.186087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.186117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.186326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.186628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.186673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.186887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.187188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.187219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.187391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.187632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.187671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.187980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.188280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.188311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 
00:33:15.905 [2024-04-17 10:29:49.188642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.188861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.188890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.189195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.189396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.189427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.189639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.189947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.189978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.190194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.190372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.190403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.190699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.190945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.190975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.191283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.191483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.191514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.191777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.191989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.192025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 
00:33:15.905 [2024-04-17 10:29:49.192333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.192633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.192673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.192955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.193281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.193311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.905 [2024-04-17 10:29:49.193529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.193801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.905 [2024-04-17 10:29:49.193833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.905 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.194137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.194470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.194501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.194796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.194925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.194955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.195235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.195521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.195551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.195774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.196015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.196045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 
00:33:15.906 [2024-04-17 10:29:49.196358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.196677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.196710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.196865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.196995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.197024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.197301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.197505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.197539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.197840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.197969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.197999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.198209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.198456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.198487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.198734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.198958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.198988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.199134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.199439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.199469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 
00:33:15.906 [2024-04-17 10:29:49.199699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.199941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.199952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.200053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.200293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.200324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.200628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.200944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.200982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.201260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.201603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.201633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.201927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.202061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.202091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.202400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.202661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.202671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.202915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.203206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.203236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 
00:33:15.906 [2024-04-17 10:29:49.203564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.203796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.906 [2024-04-17 10:29:49.203807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.906 qpair failed and we were unable to recover it. 00:33:15.906 [2024-04-17 10:29:49.204075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.204276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.204307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.204623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.204871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.204903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.205204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.205474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.205504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.205760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.205968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.205998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.206330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.206632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.206674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.206929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.207135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.207166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 
00:33:15.907 [2024-04-17 10:29:49.207445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.207746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.207757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.207922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.208193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.208223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.208380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.208608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.208639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.208797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.209046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.209077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.209366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.209634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.209675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.209981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.210301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.210331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.210557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.210853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.210863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 
00:33:15.907 [2024-04-17 10:29:49.211099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.211271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.211302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.211634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.211934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.211982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.212273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.212440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.212471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.212689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.212908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.212939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.213252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.213480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.213510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.213866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.214138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.907 [2024-04-17 10:29:49.214169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.907 qpair failed and we were unable to recover it. 00:33:15.907 [2024-04-17 10:29:49.214557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.214855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.214887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 
00:33:15.908 [2024-04-17 10:29:49.215098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.215318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.215349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.215663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.215894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.215904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.216206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.216459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.216496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.216675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.216941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.216971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.217197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.217473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.217503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.217818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.218100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.218130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.218423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.218721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.218753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 
00:33:15.908 [2024-04-17 10:29:49.219005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.219295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.219326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.219543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.219849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.219881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.220189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.220345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.220376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.220689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.221027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.221058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.221358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.221678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.221709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.221949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.222222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.222252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.222462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.222765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.222796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 
00:33:15.908 [2024-04-17 10:29:49.223138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.223434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.223464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.223707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.224039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.224070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.224276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.224486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.224517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.224812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.225020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.225051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.908 qpair failed and we were unable to recover it. 00:33:15.908 [2024-04-17 10:29:49.225326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.908 [2024-04-17 10:29:49.225599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.225630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.909 qpair failed and we were unable to recover it. 00:33:15.909 [2024-04-17 10:29:49.225870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.226134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.226145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.909 qpair failed and we were unable to recover it. 00:33:15.909 [2024-04-17 10:29:49.226390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.226675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.226709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.909 qpair failed and we were unable to recover it. 
00:33:15.909 [2024-04-17 10:29:49.227020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.227225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.227256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.909 qpair failed and we were unable to recover it. 00:33:15.909 [2024-04-17 10:29:49.227565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.227776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.909 [2024-04-17 10:29:49.227807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:15.909 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.228135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.228250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.228272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.228458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.228641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.228658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.228916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.229099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.229110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.229376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.229667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.229698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.229918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.230225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.230255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.179 [2024-04-17 10:29:49.230540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.230822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.230854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.231188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.231493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.231524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.231856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.232111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.232141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.232430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.232746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.232778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.232990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.233320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.233351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.233560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.233779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.233811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.234126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.234415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.234446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.179 [2024-04-17 10:29:49.234663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.234958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.234989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.235215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.235507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.235550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.235723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.235984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.236015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.236163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.236466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.236497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.236734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.236986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.237017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.237327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.237550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.237586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.237842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.238086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.238118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.179 [2024-04-17 10:29:49.238359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.238630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.238641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.238932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.239233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.239264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.239601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.239858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.239891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.240171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.240454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.240485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.240696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.241026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.241057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.241362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.241675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.241707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.241947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.242254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.242285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.179 [2024-04-17 10:29:49.242571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.242900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.242911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.243175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.243340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.243351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.243618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.243884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.243907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.244082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.244409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.244441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.244662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.244922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.244953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.245265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.245499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.245531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.245719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.245991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.246022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.179 [2024-04-17 10:29:49.246253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.246496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.246535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.246703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.246973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.247003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.247161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.247465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.247497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.247719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.247937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.247967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.248194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.248349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.248379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.248694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.248899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.248910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 00:33:16.179 [2024-04-17 10:29:49.249196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.249502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.179 [2024-04-17 10:29:49.249532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.179 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.249765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.250074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.250104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.250339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.250615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.250668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.250821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.251055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.251086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.251399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.251614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.251653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.251912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.252135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.252166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.252454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.252690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.252723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.253024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.253299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.253330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.253685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.254010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.254040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.254354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.254673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.254706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.255029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.255333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.255365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.255601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.255834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.255867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.256071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.256272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.256303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.256585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.256728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.256761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.257000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.257224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.257254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.257513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.257692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.257704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.257927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.258211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.258243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.258462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.258684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.258716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.258931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.259082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.259114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.259345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.259580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.259611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.259861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.260089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.260120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.260358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.260686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.260717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.260934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.261212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.261245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.261550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.261717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.261750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.262068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.262275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.262306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.262595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.262948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.262981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.263270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.263602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.263633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.263942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.264255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.264287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.264601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.264779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.264812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.265032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.265308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.265339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.265568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.265801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.265813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.266110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.266320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.266352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.266578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.266830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.266842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.266991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.267187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.267218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.267548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.267720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.267754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.268065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.268405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.268436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.268760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.268922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.268958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.269137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.269351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.269383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.269700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.269893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.269925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.270161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.270321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.270353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.270676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.270992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.271023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.271377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.271669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.271701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 00:33:16.180 [2024-04-17 10:29:49.271927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.272211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.180 [2024-04-17 10:29:49.272242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.180 qpair failed and we were unable to recover it. 
00:33:16.180 [2024-04-17 10:29:49.272427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.180 [2024-04-17 10:29:49.272636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.180 [2024-04-17 10:29:49.272677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.180 qpair failed and we were unable to recover it.
00:33:16.180 [2024-04-17 10:29:49.272988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.180 [2024-04-17 10:29:49.273273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.180 [2024-04-17 10:29:49.273304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.180 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f2080000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 10:29:49.272 and 10:29:49.351 ...]
00:33:16.183 [2024-04-17 10:29:49.350881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.183 [2024-04-17 10:29:49.351117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.183 [2024-04-17 10:29:49.351149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.183 qpair failed and we were unable to recover it.
00:33:16.183 [2024-04-17 10:29:49.351417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.351659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.351698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.351946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.352165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.352197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.352498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.352737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.352770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.352981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.353132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.353164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.353485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.353771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.353804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.354123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.354390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.354421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.354668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.354826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.354858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 
00:33:16.183 [2024-04-17 10:29:49.355090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.355332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.355343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.355598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.355732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.355745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.355932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.356145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.183 [2024-04-17 10:29:49.356177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.183 qpair failed and we were unable to recover it. 00:33:16.183 [2024-04-17 10:29:49.356424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.356713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.356751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.356887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.357023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.357066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.357306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.357558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.357592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.357866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.358043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.358075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.358265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.358471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.358502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.358832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.359052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.359082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.359266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.359480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.359512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.359818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.360224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.360526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.360853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.361040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.361381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.361421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.361728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.361963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.361975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.362116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.362319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.362351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.362514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.362732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.362766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.362937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.363211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.363242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.363560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.363886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.363919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.364189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.364450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.364481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.364729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.364907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.364939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.365181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.365410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.365442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.365683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.365866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.365878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.366074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.366282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.366313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.366553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.366874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.366907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.367167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.367360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.367391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.367634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.367875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.367888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.368085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.368293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.368328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.368675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.368905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.368937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.369174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.369416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.369447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.369681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.369899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.369930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.370188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.370509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.370539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.370810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.370973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.371004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.371243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.371523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.371555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.371908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.372136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.372166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.372422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.372677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.372711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.372969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.373258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.373290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.373537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.373804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.373837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.373987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.374323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.374355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.374576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.374800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.374832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.375051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.375343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.375374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.375519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.375842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.375874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.376067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.376287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.376319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.376609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.376910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.376943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.377265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.377514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.377527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.377809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.378037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.378049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.378319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.378617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.378659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.378899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.379212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.379243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.379533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.379746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.379780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 
00:33:16.184 [2024-04-17 10:29:49.380014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.380173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.380205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.380558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.380742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.380755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.184 qpair failed and we were unable to recover it. 00:33:16.184 [2024-04-17 10:29:49.380884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.381081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.184 [2024-04-17 10:29:49.381112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.381330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.381615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.381672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.381966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.382111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.382142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.382499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.382754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.382788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.383021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.383178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.383209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.383531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.383826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.383857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.384241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.384527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.384558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.384849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.385102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.385134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.385532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.385847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.385879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.386049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.386295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.386306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.386606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.386859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.386891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.387056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.387196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.387227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.387628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.387873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.387906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.388153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.388379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.388411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.389092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.389357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.389369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.389664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.389856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.389868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.390121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.390377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.390409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.390680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.390918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.390949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.391251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.391415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.391446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.391683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.391974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.392005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.392229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.392559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.392575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.392846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.393084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.393100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.393311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.393529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.393562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.393911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.394071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.394105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.394415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.394657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.394690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.394933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.395375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.395390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.395507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.395763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.395776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.396001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.396294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.396307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.396565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.396882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.396915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.397144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.397402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.397433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.397776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.398352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.398371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.398581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.398841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.398853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.399165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.399442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.399454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.399948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.400378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.400807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.400999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.401265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.401500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.401531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.401850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.402035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.402066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.402316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.402592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.402630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.402903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.403120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.403132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.403446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.403680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.403693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.403797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.403997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.404009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.404227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.404556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.404568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.404730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.404908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.404920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.405137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.405278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.405289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.405478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.405738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.405750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.405984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.406391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.406740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.406942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.407223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.407471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.407483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.407668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.407839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.407852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.408100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.408277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.408289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.408532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.408717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.408728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.185 [2024-04-17 10:29:49.408866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.409014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.409026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 
00:33:16.185 [2024-04-17 10:29:49.409167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.409331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.185 [2024-04-17 10:29:49.409363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.185 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.409599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.409781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.409813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.410059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.410266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.410277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.410478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.410804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.410817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.410935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.411193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.411225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.411448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.411670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.411704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.411878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.412108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.412139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.412831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.413067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.413080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.413285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.413571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.413605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.413918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.414164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.414196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.414460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.414670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.414702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.414960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.415193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.415204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.415511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.415702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.415715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.415820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.416010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.416022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.416179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.416408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.416420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.417024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.417333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.417347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.417619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.417739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.417752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.418009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.418309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.418321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.418444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.418725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.418737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.418945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.419490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.419507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.419809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.420175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.420708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.420828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.421044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.421185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.421197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.421480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.421641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.421684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.421999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.422201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.422232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.422418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.422713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.422745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.422982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.423268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.423299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.423585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.423790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.423822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.424123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.424431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.424463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.424765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.425306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.425766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.425969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.426110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.426250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.426262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.426457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.426751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.426767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.426984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.427170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.427181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.427377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.428721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.428749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.428923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.429166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.429178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.429428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.429695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.429728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.429973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.430129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.430143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.430336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.430523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.430534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.430748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.430970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.431001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 
00:33:16.186 [2024-04-17 10:29:49.431249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.432564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.432588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.432880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.433073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.433084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.433375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.433618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.433659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.433992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.434389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.434850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.434991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.186 qpair failed and we were unable to recover it. 00:33:16.186 [2024-04-17 10:29:49.435306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.435544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.186 [2024-04-17 10:29:49.435575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.435804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.436103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.436142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.436412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.436689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.436701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.436997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.437346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.437358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.437655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.437950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.437981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.438147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.438457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.438489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.438703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.438981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.438992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.439232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.439428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.439439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.439671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.439892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.439923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.440147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.440324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.440335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.440577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.440839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.440851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.440987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.441330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.441660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.441878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.442059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.442225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.442256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.442551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.442769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.442780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.443057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.443352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.443383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.443611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.443853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.443866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.444113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.444320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.444332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.444604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.444796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.444808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.445037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.445243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.445274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.445499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.445781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.445820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.446132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.446412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.446442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.446667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.446901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.446932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.447161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.447389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.447419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.447677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.447892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.447923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.448242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.448487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.448498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.448685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.448866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.448877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.449154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.449343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.449356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.449564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.450920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.450945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.451235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.451519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.451530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.451810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.451952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.451964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.452166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.452359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.452370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.452586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.452900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.452913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.453074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.453237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.453249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.453527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.454860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.454884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.455100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.455376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.455399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.455642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.455855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.455867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.456071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.456313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.456355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.456608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.456905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.456938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.457094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.457334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.457347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.457532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.457849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.457863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.458124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.458266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.458278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.458552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.458755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.458767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.459018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.459191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.459203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.459432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.459762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.459801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.460132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.460389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.460419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.460640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.460985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.461020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.461334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.461548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.461584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.461712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.461913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.461948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 
00:33:16.187 [2024-04-17 10:29:49.462152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.462510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.462547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.462791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.463037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.463072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.463315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.463551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.463582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.463838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.464049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.464083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.187 [2024-04-17 10:29:49.464417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.464598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.187 [2024-04-17 10:29:49.464629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.187 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.464883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.465391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.465817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.465993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.466140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.466360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.466372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.466508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.466726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.466738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.466929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.467148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.467159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.467376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.467589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.467601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.467782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.468244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.468790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.468937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.469191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.469471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.469483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.469617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.469906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.469940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.470107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.470378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.470389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.470635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.470830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.470861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.471118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.471292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.471304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.471475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.471656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.471669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.471937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.472179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.472557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.472729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.472858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.473128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.473140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.473275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.473513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.473543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.473889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.474208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.474239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.474485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.474747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.474779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.475002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.475237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.475267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.475512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.475687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.475701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.475945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.476145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.476177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.476349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.476627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.476673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.478048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.478285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.478297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.478586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.478848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.478861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.479036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.479326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.479338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.479540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.479724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.479737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.480021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.480218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.480230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.480499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.480653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.480665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.480872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.481174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.481185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.481492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.481739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.481772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.482068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.482228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.482240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.482439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.482598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.482609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.482927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.483208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.483220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.483509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.483704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.483737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.484007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.484253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.484285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.484516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.484717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.484729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.484942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.485972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.485999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.486298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.486597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.486608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.486759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.486955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.486967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.487168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.487338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.487368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.487585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.487781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.487814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.488104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.488415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.488427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.488755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.488971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.489010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.489248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.489456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.489470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.489724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.489974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.489986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.490186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.490458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.490471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 
00:33:16.188 [2024-04-17 10:29:49.490701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.490910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.490942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.491288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.491522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.491552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.188 qpair failed and we were unable to recover it. 00:33:16.188 [2024-04-17 10:29:49.491794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.492026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.188 [2024-04-17 10:29:49.492058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.492213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.492492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.492523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.492763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.492994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.493025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.493316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.493552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.493583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.493849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.494098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.494130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 
00:33:16.189 [2024-04-17 10:29:49.494344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.494566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.494597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.494805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.495136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.495169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.495474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.495779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.495791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.495976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.496099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.496112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.496309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.496581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.496594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.496729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.497054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.497086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.497335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.497652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.497686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 
00:33:16.189 [2024-04-17 10:29:49.497948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.498235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.498266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.498603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.498802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.498834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.499130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.499523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.499554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.499825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.500040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.189 [2024-04-17 10:29:49.500052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.189 qpair failed and we were unable to recover it. 00:33:16.189 [2024-04-17 10:29:49.500356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.500529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.500542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.500748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.500874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.500886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.501005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.501328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.501340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 
00:33:16.461 [2024-04-17 10:29:49.501530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.501812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.501824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.502114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.502395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.502406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.502522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.502774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.502787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.502971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.503191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.503221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.503586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.503926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.503959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.504267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.504590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.504622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.504939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.505163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.505194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 
00:33:16.461 [2024-04-17 10:29:49.505460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.505763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.505795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.506031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.506212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.506243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.506482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.506822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.506853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.507162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.507432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.507464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.507625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.507856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.507891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.461 qpair failed and we were unable to recover it. 00:33:16.461 [2024-04-17 10:29:49.508062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.508387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.461 [2024-04-17 10:29:49.508418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.508664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.508902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.508934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 
00:33:16.462 [2024-04-17 10:29:49.509167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.509438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.509470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.509707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.509938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.509971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.510302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.510559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.510590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.510884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.511062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.511093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.511319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.511533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.511564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.511815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.512073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.512104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.512341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.512733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.512765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 
00:33:16.462 [2024-04-17 10:29:49.512943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.513102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.513133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.513390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.513566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.513597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.513844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.514267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.514701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.514826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.515073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.515280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.515303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.515510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.515732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.515765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 
00:33:16.462 [2024-04-17 10:29:49.516008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.516313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.516344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.516706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.516969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.517000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.517229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.517539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.517580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.517840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.518068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.518080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.518394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.518660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.518693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.518916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.519083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.519114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.519295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.519574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.519605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 
00:33:16.462 [2024-04-17 10:29:49.519960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.520253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.520291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.520531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.520820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.520853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.521035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.521275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.521307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.521597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.521867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.521900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.522067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.522348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.522379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.522688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.522945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.522976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.462 qpair failed and we were unable to recover it. 00:33:16.462 [2024-04-17 10:29:49.523199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.462 [2024-04-17 10:29:49.523546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.523577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 
00:33:16.463 [2024-04-17 10:29:49.523810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.523999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.524012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.524209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.524341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.524371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.524630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.524815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.524845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.525159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.525454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.525491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.525719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.525907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.525919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.526166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.526456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.526486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.526658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.526877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.526908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 
00:33:16.463 [2024-04-17 10:29:49.527148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.527449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.527480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.527876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.528043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.528074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.528308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.528601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.528633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.528886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.529142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.529174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.529491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.529732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.529745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.529870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.530128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.530159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.530420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.530664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.530697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 
00:33:16.463 [2024-04-17 10:29:49.530883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.531221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.531252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.531472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.531796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.531809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.532042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.532220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.532252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.532549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.532750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.532784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.532981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.533198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.533231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.533483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.533722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.533755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.535413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.535655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.535671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 
00:33:16.463 [2024-04-17 10:29:49.535930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.536106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.536136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.537332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.537902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.537926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.538162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.538412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.538423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.538625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.538870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.538884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.539015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.539188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.539200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.539456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.539710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.463 [2024-04-17 10:29:49.539724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.463 qpair failed and we were unable to recover it. 00:33:16.463 [2024-04-17 10:29:49.539947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.540091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.540103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 
00:33:16.464 [2024-04-17 10:29:49.540393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.540655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.540668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.540866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.541252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.541658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.541811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.541925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.542186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.542200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.542485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.542677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.542689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.542893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.543027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.543040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 
00:33:16.464 [2024-04-17 10:29:49.543208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.543535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.543547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.543771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.544291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.544756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.544890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.544998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.545218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.545229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.545423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.545610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.545623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.545908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.546155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.546168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 
00:33:16.464 [2024-04-17 10:29:49.546458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.546652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.546664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.546893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.547236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.547717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.547918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.548057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.548247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.548259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.548475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.548680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.548692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.548915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.549106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.549118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 
00:33:16.464 [2024-04-17 10:29:49.549389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.549666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.549679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.549899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.550321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.550831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.550982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.551128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.551311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.551323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.551617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.551750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.551764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.464 qpair failed and we were unable to recover it. 00:33:16.464 [2024-04-17 10:29:49.551964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.552210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.464 [2024-04-17 10:29:49.552222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 
00:33:16.465 [2024-04-17 10:29:49.552429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.552678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.552691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.552964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.553159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.553171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.553480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.553734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.553746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.553923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.554199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.554211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.554519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.554713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.554726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.554974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.555387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 
00:33:16.465 [2024-04-17 10:29:49.555631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.555791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.555986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.556176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.556188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.556494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.556669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.556683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.556963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.557170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.557181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.557471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.557680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.557691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.557886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.558133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.558145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.558395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.558653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.558665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 
00:33:16.465 [2024-04-17 10:29:49.558946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.559210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.559221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.559423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.559725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.559738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.559886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.560156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.560168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.560352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.560535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.560547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.560744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.560991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.561004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.561204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.561392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.561403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.561685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.561824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.561837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 
00:33:16.465 [2024-04-17 10:29:49.562033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.562322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.562334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.562603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.562711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.562723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.562946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.563218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.563229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.563420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.563702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.563713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.465 [2024-04-17 10:29:49.564016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.564294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.465 [2024-04-17 10:29:49.564306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.465 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.564584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.564711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.564723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.564903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.565078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.565089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 
00:33:16.466 [2024-04-17 10:29:49.565396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.565697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.565709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.565978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.566223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.566234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.566534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.566732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.566744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.567024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.567153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.567164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.567357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.567628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.567640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.567923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.568276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 
00:33:16.466 [2024-04-17 10:29:49.568683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.568896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.569081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.569333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.569345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.569526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.569768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.569780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.569968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.570249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.570260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.570380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.570635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.570651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.570975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.571450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 
00:33:16.466 [2024-04-17 10:29:49.571818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.571963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.572240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.572428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.572440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.572556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.572697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.572709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.572831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.573100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.573112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.466 [2024-04-17 10:29:49.573306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.573522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.466 [2024-04-17 10:29:49.573533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.466 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.573703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.573973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.573985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.574155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.574324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.574336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 
00:33:16.467 [2024-04-17 10:29:49.574609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.574796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.574808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.575017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.575278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.575289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.575543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.575728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.575740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.575952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.576142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.576154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.576325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.576624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.576635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.576811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.577018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.577029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.577222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.577533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.577544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 
00:33:16.467 [2024-04-17 10:29:49.577817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.578201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.578659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.578943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.579195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.579448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.579459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.579633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.579848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.579859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.580028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.580238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.580249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.580417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.580678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.580690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 
00:33:16.467 [2024-04-17 10:29:49.580908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.581280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.581658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.581876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.582123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.582219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.582231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.582412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.582541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.582552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.582749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.583347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 
00:33:16.467 [2024-04-17 10:29:49.583594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.583794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.583980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.584272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.584285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.584570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.584740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.584752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.467 qpair failed and we were unable to recover it. 00:33:16.467 [2024-04-17 10:29:49.585017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.467 [2024-04-17 10:29:49.585219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.585230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.585444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.585688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.585700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.585984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.586483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 
00:33:16.468 [2024-04-17 10:29:49.586784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.586969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.587154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.587327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.587341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.587604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.587815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.587826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.588114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.588387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.588398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.588659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.588828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.588839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.589010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.589274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.589285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.589474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.589744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.589756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 
00:33:16.468 [2024-04-17 10:29:49.590021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.590282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.590293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.590483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.590685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.590696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.590882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.590998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.591009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.591264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.591434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.591445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.591554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.591739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.591754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.591944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.592212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.592223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.592489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.592731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.592742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 
00:33:16.468 [2024-04-17 10:29:49.592926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.593188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.593199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.593464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.593709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.593720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.593982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.594160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.594171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.594435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.594677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.594687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.594948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.595147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.595158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.595411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.595704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.595716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.595978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 
00:33:16.468 [2024-04-17 10:29:49.596425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.596844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.596989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.468 [2024-04-17 10:29:49.597256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.597506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.468 [2024-04-17 10:29:49.597517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.468 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.597769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.598035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.598046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.598306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.598555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.598566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.598738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.599024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.599034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.599301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.599538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.599549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 
00:33:16.469 [2024-04-17 10:29:49.599741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.600020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.600031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.600225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.600483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.600494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.600759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.601259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.601733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.601857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.602030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.602379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 
00:33:16.469 [2024-04-17 10:29:49.602801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.602987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.603178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.603380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.603391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.603653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.603896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.603908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.604168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.604404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.604415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.604678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.604915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.604926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.605194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.605376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.605387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.605622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.605864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.605875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 
00:33:16.469 [2024-04-17 10:29:49.606060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.606224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.606236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.606401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.606662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.606673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.606954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.607334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.607699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.607981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.608246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.608497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.608507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.608762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.608865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.608875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 
00:33:16.469 [2024-04-17 10:29:49.609144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.609430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.609441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.469 qpair failed and we were unable to recover it. 00:33:16.469 [2024-04-17 10:29:49.609719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.469 [2024-04-17 10:29:49.609956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.609967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.610152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.610438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.610449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.610654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.610863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.610873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.611134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.611336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.611346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.611650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.611888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.611899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.612080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 
00:33:16.470 [2024-04-17 10:29:49.612378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.612731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.612917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.613154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.613359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.613369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.613536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.613811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.613822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.614008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.614173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.614184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.614384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.614557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.614567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.614832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.614996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.615007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 
00:33:16.470 [2024-04-17 10:29:49.615170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.615408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.615419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.615611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.615902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.615913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.616126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.616389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.616399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.616592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.616815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.616826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.617007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.617287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.617298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.617429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.617598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.617608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.617797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.618086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.618099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 
00:33:16.470 [2024-04-17 10:29:49.618334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.618575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.618586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.618843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.619021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.619032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.619258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.619521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.619531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.619837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.620273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.620753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.620882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.470 qpair failed and we were unable to recover it. 00:33:16.470 [2024-04-17 10:29:49.621066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.621262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.470 [2024-04-17 10:29:49.621272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 
00:33:16.471 [2024-04-17 10:29:49.621534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.621733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.621744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.621948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.622158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.622170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.622447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.622566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.622577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.622827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.623155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.623576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.623867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.624044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.624229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.624239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 
00:33:16.471 [2024-04-17 10:29:49.624486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.624688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.624699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.624882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.625341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.625756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.625932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.626098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.626279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.626307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.626547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.626761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.626773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.626946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.627191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.627201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 
00:33:16.471 [2024-04-17 10:29:49.627446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.627634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.627649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.627924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.628311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.628696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.628985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.629194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.629392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.629404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.629638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.629877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.629887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.630170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 
00:33:16.471 [2024-04-17 10:29:49.630526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.471 [2024-04-17 10:29:49.630782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.471 [2024-04-17 10:29:49.630895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.471 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.631062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.631313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.631323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.631488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.631716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.631728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.631914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.632146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.632157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.632414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.632605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.632616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.632795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.633079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.633089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 
00:33:16.472 [2024-04-17 10:29:49.633276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.633535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.633545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.633776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.634231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.634746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.634934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.635049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.635217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.635229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.635485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.635650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.635661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.635936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.636148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.636160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 
00:33:16.472 [2024-04-17 10:29:49.636454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.636720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.636734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.636900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.637112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.637123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.637304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.637568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.637579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.637818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.638064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.638075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.638364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.638654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.638666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.638943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.639323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 
00:33:16.472 [2024-04-17 10:29:49.639721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.639999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.640178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.640348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.640358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.640530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.640628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.640638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.640876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.641238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.641712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.641902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.472 qpair failed and we were unable to recover it. 00:33:16.472 [2024-04-17 10:29:49.642197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.642301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.472 [2024-04-17 10:29:49.642313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 
00:33:16.473 [2024-04-17 10:29:49.642575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.642745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.642757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.642878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.643113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.643124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.643326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.643588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.643599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.643838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.644178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.644682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.644925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.645189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 
00:33:16.473 [2024-04-17 10:29:49.645517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.645722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.645970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.646201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.646383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.646394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.646651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.646867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.646878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.647135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.647253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.647264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.647551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.647651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.647661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 00:33:16.473 [2024-04-17 10:29:49.647916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.648123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.473 [2024-04-17 10:29:49.648134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.473 qpair failed and we were unable to recover it. 
00:33:16.473 [2024-04-17 10:29:49.648297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.473 [2024-04-17 10:29:49.648514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.473 [2024-04-17 10:29:49.648526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.473 qpair failed and we were unable to recover it.
00:33:16.473 [the same four-line failure pattern repeats for every subsequent connect attempt from 10:29:49.648 through 10:29:49.709: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:33:16.479 [2024-04-17 10:29:49.709787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.709970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.709980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.710089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.710317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.710328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.710562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.710799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.710811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.710985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.711398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.711799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.711969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.712183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.712298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.712309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 
00:33:16.479 [2024-04-17 10:29:49.712571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.712831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.712842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.713008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.713187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.713197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.713480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.713766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.713777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.713956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.714377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.714656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.714930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.715100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.715266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.715276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 
00:33:16.479 [2024-04-17 10:29:49.715530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.715715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.715725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.715928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.716315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.716605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.716796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.717055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.717234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.717245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.717441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.717628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.717640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.717938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 
00:33:16.479 [2024-04-17 10:29:49.718321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.718668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.479 [2024-04-17 10:29:49.718884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.479 qpair failed and we were unable to recover it. 00:33:16.479 [2024-04-17 10:29:49.719062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.719240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.719250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.719444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.719652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.719663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.719904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.720369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.720778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.720964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 
00:33:16.480 [2024-04-17 10:29:49.721163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.721455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.721466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.721712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.721872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.721884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.722002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.722165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.722176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.722418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.722600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.722610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.722941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.723436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.723732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.723977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 
00:33:16.480 [2024-04-17 10:29:49.724102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.724223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.724234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.724508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.724700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.724711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.724824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.725087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.725098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.725419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.725682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.725693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.725830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.726287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.726673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.726960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 
00:33:16.480 [2024-04-17 10:29:49.727191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.727452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.727463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.727651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.727856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.727866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.728036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.728330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.480 [2024-04-17 10:29:49.728635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.480 [2024-04-17 10:29:49.728826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.480 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.729001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.729418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 
00:33:16.481 [2024-04-17 10:29:49.729776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.729957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.730075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.730236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.730247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.730449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.730616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.730627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.730802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.730993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.731003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.731234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.731433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.731443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.731713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.731905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.731916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.732115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.732332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.732342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 
00:33:16.481 [2024-04-17 10:29:49.732460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.732690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.732702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.732994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.733211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.733222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.733421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.733596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.733607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.733798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.734133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.734501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.734700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.734907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.735096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.735106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 
00:33:16.481 [2024-04-17 10:29:49.735450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.735629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.735639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.735877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.736119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.736129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.736413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.736635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.736651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.736901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.737213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.737691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.737889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.738103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.738315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.738326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 
00:33:16.481 [2024-04-17 10:29:49.738614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.738878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.738890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.739054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.739321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.739333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.739615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.739747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.739758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.740001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.740230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.740241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.740516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.740758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.740769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.481 qpair failed and we were unable to recover it. 00:33:16.481 [2024-04-17 10:29:49.740958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.481 [2024-04-17 10:29:49.741123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.741135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.741344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.741604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.741615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 
00:33:16.482 [2024-04-17 10:29:49.741799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.741917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.741928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.742121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.742408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.742419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.742664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.742906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.742917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.743082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.743494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.743772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.743887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.744090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.744290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.744300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 
00:33:16.482 [2024-04-17 10:29:49.744477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.744636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.744652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.744833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.745109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.745121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.745382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.745640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.745661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.745847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.746254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.746764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.746943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.747118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.747293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.747304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 
00:33:16.482 [2024-04-17 10:29:49.747559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.747730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.747741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.747976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.748285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.748711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.748986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.749103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.749335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.749346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.749520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.749690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.749701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.749958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.750240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.750250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 
00:33:16.482 [2024-04-17 10:29:49.750483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.750671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.750682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.750931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.751234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.751696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.751939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.752221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.752473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.752484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.752712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.752959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.752969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 00:33:16.482 [2024-04-17 10:29:49.753253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.753525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.753536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.482 qpair failed and we were unable to recover it. 
00:33:16.482 [2024-04-17 10:29:49.753704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.482 [2024-04-17 10:29:49.753883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.753894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.754130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.754385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.754396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.754579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.754807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.754820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.754946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.755195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.755205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.755395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.755670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.755681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.755925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.756103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.756113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.756315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.756481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.756491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 
00:33:16.483 [2024-04-17 10:29:49.756841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.757172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.757210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.757548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.757813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.757844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.758068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.758365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.758396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.758699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.758918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.758948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.759166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.759506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.759537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.759752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.760055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.760085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.760368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.760629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.760668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 
00:33:16.483 [2024-04-17 10:29:49.760838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.761009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.761039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.761368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.761665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.761697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.761947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.762108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.762137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.762507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.762781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.762814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.763035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.763316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.763346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.763577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.763729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.763760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.764014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.764230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.764260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 
00:33:16.483 [2024-04-17 10:29:49.764425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.764628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.764668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.764863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.765054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.765084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.765319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.765621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.765658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.765936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.766204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.766234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.766440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.766609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.766640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.766904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.767121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.767151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.767431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.767731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.767763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 
00:33:16.483 [2024-04-17 10:29:49.767935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.768138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.768167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.768338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.768527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.768558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.768812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.768979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.769010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.483 qpair failed and we were unable to recover it. 00:33:16.483 [2024-04-17 10:29:49.769218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.769536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.483 [2024-04-17 10:29:49.769567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.769784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.770057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.770087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.770355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.770588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.770618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.770817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.771130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.771160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 
00:33:16.484 [2024-04-17 10:29:49.771436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.771658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.771695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.771921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.772153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.772183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.772427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.772729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.772760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.773069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.773213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.773243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.773407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.773714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.773745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.773909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.774073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.774103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.774389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.774685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.774717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 
00:33:16.484 [2024-04-17 10:29:49.775003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.775203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.775233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.775444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.775722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.775754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.775974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.776147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.776177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.776522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.776827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.776858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.777003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.777212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.777242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.777466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.777687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.777718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.777939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.778213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.778243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 
00:33:16.484 [2024-04-17 10:29:49.778453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.778746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.778778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.778945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.779365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.779697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.779865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.484 [2024-04-17 10:29:49.780033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.780295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.484 [2024-04-17 10:29:49.780325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.484 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.780601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.780829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.780861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.781025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.781178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.781208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 
00:33:16.759 [2024-04-17 10:29:49.781489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.781693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.781724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.781945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.782208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.782240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.782486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.782618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.782675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.782980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.783189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.783219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.783426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.783573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.783604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.783824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.784117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.784147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.784311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.784522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.784552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 
00:33:16.759 [2024-04-17 10:29:49.784853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.785303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.785797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.785978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.786199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.786405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.786435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.786608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.786843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.759 [2024-04-17 10:29:49.786875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.759 qpair failed and we were unable to recover it. 00:33:16.759 [2024-04-17 10:29:49.787193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.787339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.787369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.787526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.787681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.787712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 
00:33:16.760 [2024-04-17 10:29:49.787917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.788078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.788108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.788384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.788595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.788625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.788824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.789247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.789733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.789901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.790040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.790267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.790297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.790506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.790841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.790878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 
00:33:16.760 [2024-04-17 10:29:49.791095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.791317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.791348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.791513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.791730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.791762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.792023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.792293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.792323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.792464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.792698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.792730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.792958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.793284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.793672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.793903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 
00:33:16.760 [2024-04-17 10:29:49.794120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.794271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.794301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.794514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.794743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.794774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.794994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.795253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.795282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.795528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.795822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.795852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.796140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.796420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.796451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.796727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.796885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.796915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.797131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.797341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.797372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 
00:33:16.760 [2024-04-17 10:29:49.797674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.797837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.797867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.798079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.798349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.798379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.798584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.798789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.798820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.798963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.799234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.799264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.799417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.799582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.799613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.760 [2024-04-17 10:29:49.799858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.799997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.760 [2024-04-17 10:29:49.800027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.760 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.800241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.800419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.800449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 
00:33:16.761 [2024-04-17 10:29:49.800609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.800824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.800855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.801074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.801307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.801337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.801541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.801833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.801864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.802073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.802417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.802447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.802659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.802937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.802968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.803128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.803351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.803381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.803601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.803818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.803849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 
00:33:16.761 [2024-04-17 10:29:49.804113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.804270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.804300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.804522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.804819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.804850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.805063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.805199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.805229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.805407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.805590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.805620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.805814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.806233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.806728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.806980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 
00:33:16.761 [2024-04-17 10:29:49.807186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.807394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.807424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.807727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.808010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.808040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.808311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.808580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.808610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.808915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.809134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.809165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.809377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.809593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.809622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.809857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.810341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 
00:33:16.761 [2024-04-17 10:29:49.810768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.810969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.811126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.811338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.811369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.811609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.811764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.811796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.812036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.812306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.812336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.761 [2024-04-17 10:29:49.812562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.812860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.761 [2024-04-17 10:29:49.812892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.761 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.813107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.813244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.813274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.813508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.813714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.813745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 
00:33:16.762 [2024-04-17 10:29:49.814050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.814275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.814305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.814615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.814860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.814896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.815169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.815466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.815497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.815667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.815869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.815899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.816139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.816296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.816327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.816472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.816677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.816707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.816923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.817191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.817221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 
00:33:16.762 [2024-04-17 10:29:49.817430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.817543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.817573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.817878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.818295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.818779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.818973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.819246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.819453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.819483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.819783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.820069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.820099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.820401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.820640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.820678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 
00:33:16.762 [2024-04-17 10:29:49.820953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.821110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.821140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.821391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.821606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.821636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.821875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.822301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.822755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.822989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.823263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.823415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.823445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.823690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.823859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.823889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 
00:33:16.762 [2024-04-17 10:29:49.824100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.824341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.824371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.824661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.824963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.824993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.825298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.825500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.825530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.825707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.825935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.825965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.762 qpair failed and we were unable to recover it. 00:33:16.762 [2024-04-17 10:29:49.826182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.826482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.762 [2024-04-17 10:29:49.826512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.826732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.826893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.826923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.827148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.827416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.827446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 
00:33:16.763 [2024-04-17 10:29:49.827664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.827935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.827964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.828193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.828495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.828525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.828744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.828892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.828922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.829153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.829355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.829385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.829629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.829860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.829891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.830195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.830327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.830358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.830567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.830796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.830827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 
00:33:16.763 [2024-04-17 10:29:49.830982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.831257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.831286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.831534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.831748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.831779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.832004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.832273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.832302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.832515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.832741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.832773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.833050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.833204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.833235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.833520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.833662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.833693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.833920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.834136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.834165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 
00:33:16.763 [2024-04-17 10:29:49.834465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.834708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.834740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.834968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.835190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.835220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.835423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.835581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.835612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.836029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.836243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.836273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.836439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.836578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.836607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.836933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.837067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.837097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.837352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.837513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.837543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 
00:33:16.763 [2024-04-17 10:29:49.837852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.838083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.838114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.838387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.838606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.838636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.763 [2024-04-17 10:29:49.838851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.839053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.763 [2024-04-17 10:29:49.839083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.763 qpair failed and we were unable to recover it. 00:33:16.764 [2024-04-17 10:29:49.839364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.839572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.839608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.764 qpair failed and we were unable to recover it. 00:33:16.764 [2024-04-17 10:29:49.839879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.840038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.840068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.764 qpair failed and we were unable to recover it. 00:33:16.764 [2024-04-17 10:29:49.840213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.840478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.840508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.764 qpair failed and we were unable to recover it. 00:33:16.764 [2024-04-17 10:29:49.840716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.841015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.841045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.764 qpair failed and we were unable to recover it. 
00:33:16.764 [2024-04-17 10:29:49.841281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.841420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.764 [2024-04-17 10:29:49.841449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.841665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.841889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.841919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.842069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.842279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.842309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.842519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.842656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.842686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.842928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.843068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.843098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.843353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.843621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.843658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.843935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.844172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.844202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 
00:33:16.765 [2024-04-17 10:29:49.844344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.844632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.844669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.844951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.845153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.765 [2024-04-17 10:29:49.845183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.765 qpair failed and we were unable to recover it. 00:33:16.765 [2024-04-17 10:29:49.845477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.845680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.845711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.845955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.846105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.846136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.846343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.846553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.846583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.846857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.847192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 
00:33:16.766 [2024-04-17 10:29:49.847721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.847884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.848101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.848254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.848284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.848562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.848832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.848863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.849077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.849307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.849337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.849656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.849808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.849838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.850059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.850260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.850290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.850523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.850756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.850787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 
00:33:16.766 [2024-04-17 10:29:49.851104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.851287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.851317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.851537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.851747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.851778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.851948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.852215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.852246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.852454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.852722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.852754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.853052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.853397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.853694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.853906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 
00:33:16.766 [2024-04-17 10:29:49.854121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.854278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.854308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.854558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.854714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.854746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.854954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.855158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.855188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.855396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.855608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.855637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.855927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.856168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.856198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.856435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.856579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.856609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.856912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.857182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.857213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 
00:33:16.766 [2024-04-17 10:29:49.857489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.857702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.857733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.858008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.858168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.858198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.858404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.858603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.766 [2024-04-17 10:29:49.858639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.766 qpair failed and we were unable to recover it. 00:33:16.766 [2024-04-17 10:29:49.858978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.859204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.859234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.859562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.859848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.859879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.860031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.860247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.860277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.860553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.860832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.860863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 
00:33:16.767 [2024-04-17 10:29:49.861075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.861345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.861375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.861658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.861891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.861921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.862215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.862358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.862389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.862601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.862825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.862856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.863180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.863391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.863421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.863690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.863807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.863837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.864075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.864311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.864341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 
00:33:16.767 [2024-04-17 10:29:49.864615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.864926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.864957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.865180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.865395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.865425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.865631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.865854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.865885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.866029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.866374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.866761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.866926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.867148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.867473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.867503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 
00:33:16.767 [2024-04-17 10:29:49.867753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.867912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.867942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.868171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.868326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.868356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.868602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.868817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.868848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.869097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.869308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.869338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.869583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.869814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.869845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.870060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.870273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.870302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.870534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.870809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.870840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 
00:33:16.767 [2024-04-17 10:29:49.871002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.871216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.871247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.871496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.871640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.871696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.767 qpair failed and we were unable to recover it. 00:33:16.767 [2024-04-17 10:29:49.871971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.767 [2024-04-17 10:29:49.872269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.872300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.768 qpair failed and we were unable to recover it. 00:33:16.768 [2024-04-17 10:29:49.872614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.872918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.872949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.768 qpair failed and we were unable to recover it. 00:33:16.768 [2024-04-17 10:29:49.873117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.873336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.873366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.768 qpair failed and we were unable to recover it. 00:33:16.768 [2024-04-17 10:29:49.873651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.873948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.873978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.768 qpair failed and we were unable to recover it. 00:33:16.768 [2024-04-17 10:29:49.874252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.874520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.768 [2024-04-17 10:29:49.874550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.768 qpair failed and we were unable to recover it. 
00:33:16.768 [2024-04-17 10:29:49.884205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.884499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.884529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420
00:33:16.768 qpair failed and we were unable to recover it.
00:33:16.768 [2024-04-17 10:29:49.884678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.884894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.884924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420
00:33:16.768 qpair failed and we were unable to recover it.
00:33:16.768 [2024-04-17 10:29:49.885073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.885339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.885369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420
00:33:16.768 qpair failed and we were unable to recover it.
00:33:16.768 [2024-04-17 10:29:49.885577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.768 [2024-04-17 10:29:49.885738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.885769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420
00:33:16.769 qpair failed and we were unable to recover it.
00:33:16.769 [2024-04-17 10:29:49.885994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.886212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.886241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420
00:33:16.769 qpair failed and we were unable to recover it.
00:33:16.769 [2024-04-17 10:29:49.886375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.886556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.886594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.769 qpair failed and we were unable to recover it.
00:33:16.769 [2024-04-17 10:29:49.886919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.887083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:16.769 [2024-04-17 10:29:49.887114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:16.769 qpair failed and we were unable to recover it.
00:33:16.774 [2024-04-17 10:29:49.932607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.932857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.932887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.933115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.933325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.933336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.933503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.933765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.933802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.934026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.934230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.934260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.934479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.934582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.934592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.934851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.935280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 
00:33:16.774 [2024-04-17 10:29:49.935565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.935741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.935983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.936099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.936128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.936306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.936507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.936539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.936844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.937065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.937098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.937408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.937558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.937588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.937768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.938086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.938126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.938366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.938596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.938628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 
00:33:16.774 [2024-04-17 10:29:49.938878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.939229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.939594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.939784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.939970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.940400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.940634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.940808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.940919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.941120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.941130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 
00:33:16.774 [2024-04-17 10:29:49.941433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.941564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.941593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.774 [2024-04-17 10:29:49.941767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.941978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.774 [2024-04-17 10:29:49.942019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.774 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.942155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.942444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.942454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.942708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.942964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.942975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.943103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.943280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.943291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.943504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.943683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.943694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.943868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 
00:33:16.775 [2024-04-17 10:29:49.944297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.944726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.944906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.945032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.945349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.945380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.945606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.945836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.945847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.945954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.946379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.946766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.946952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 
00:33:16.775 [2024-04-17 10:29:49.947252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.947393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.947423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.947655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.947860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.947890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.948196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.948472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.948502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.948729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.948998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.949028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.949278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.949524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.949554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.949698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.949954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.949965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.775 [2024-04-17 10:29:49.950076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.950268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.950278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 
00:33:16.775 [2024-04-17 10:29:49.950455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.950685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.775 [2024-04-17 10:29:49.950696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.775 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.950963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.951267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.951556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.951729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.951829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.952243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.952522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 
00:33:16.776 [2024-04-17 10:29:49.952849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.952951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.953149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.953297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.953327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.953475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.953736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.953747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.953856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.954043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.954072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.954357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.954564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.954594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.954814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.955272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 
00:33:16.776 [2024-04-17 10:29:49.955671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.955962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.956153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.956592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.956822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.956991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.957183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.957609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.957870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.957995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 
00:33:16.776 [2024-04-17 10:29:49.958176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.958487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.958516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.958789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.959374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.959718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.959895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.960092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.960265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.960294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.960459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.960702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.960733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 00:33:16.776 [2024-04-17 10:29:49.960963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.961166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.776 [2024-04-17 10:29:49.961195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.776 qpair failed and we were unable to recover it. 
00:33:16.777 [2024-04-17 10:29:49.961407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.961625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.961635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.961744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.961912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.961923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.962083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.962370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.962747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.962937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.963126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.963353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 
00:33:16.777 [2024-04-17 10:29:49.963745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.963883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.963988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.964200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.964614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.964816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.965070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.965219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.965261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.965446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.965551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.965561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.965774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.965988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.966017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 
00:33:16.777 [2024-04-17 10:29:49.966168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.966368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.966398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.966627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.966778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.966809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.967020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.967423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.967678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.967901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.968068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.968438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 
00:33:16.777 [2024-04-17 10:29:49.968713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.968837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.968997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.969179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.969190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.969376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.969553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.969584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.969742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.969980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.970010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.970248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.970474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.970513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.777 [2024-04-17 10:29:49.970715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.970823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.777 [2024-04-17 10:29:49.970834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.777 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.970996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.971214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.971243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 
00:33:16.778 [2024-04-17 10:29:49.971466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.971703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.971714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.971888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.972060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.972090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.972395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.972549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.972590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.972788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.972993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.973022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.973247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.973410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.973440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.973793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.974000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.974037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 00:33:16.778 [2024-04-17 10:29:49.974181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.974370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.778 [2024-04-17 10:29:49.974400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:16.778 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, logged twice per attempt; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt logged from 10:29:49.974541 through 10:29:50.027465, console timestamps 00:33:16.778 to 00:33:16.784; tqpair is 0x7f2080000b90 for all of these attempts except two on tqpair=0x8a2b60 at 10:29:49.974831 and 10:29:49.975182 ...]
00:33:16.784 [2024-04-17 10:29:50.027607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.027698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.027709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.027871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.028232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.028608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.028735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.028896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.029259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.029477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 
00:33:16.784 [2024-04-17 10:29:50.029829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.029998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.030162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.030379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.030697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.030888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.031090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.031343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.031678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.031810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 
00:33:16.784 [2024-04-17 10:29:50.031976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.032320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.032554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.032701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.032930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.033188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.033198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.033447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.033663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.033673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.033924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.034204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 
00:33:16.784 [2024-04-17 10:29:50.034665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.034865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.034987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.035218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.035231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.035421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.035664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.035675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.035847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.036232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.036668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.784 [2024-04-17 10:29:50.036874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.784 qpair failed and we were unable to recover it. 00:33:16.784 [2024-04-17 10:29:50.037040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.037278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.037290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 
00:33:16.785 [2024-04-17 10:29:50.037453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.037665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.037678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.037961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.038427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.038813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.038982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.039203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.039440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.039465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.039626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.039778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.039794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.040050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.040277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.040305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 
00:33:16.785 [2024-04-17 10:29:50.040507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.040670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.040716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.040898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.041268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.041512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.041700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.041822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.042153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.042641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 
00:33:16.785 [2024-04-17 10:29:50.042862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.042989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.043151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.043523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.043885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.043997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.044107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.044458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 00:33:16.785 [2024-04-17 10:29:50.044687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.785 [2024-04-17 10:29:50.044927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.785 qpair failed and we were unable to recover it. 
00:33:16.786 [2024-04-17 10:29:50.045095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.045270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.045279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.045467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.045592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.045603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.045874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.046259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.046695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.046886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.047144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.047385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 
00:33:16.786 [2024-04-17 10:29:50.047746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.047990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.048193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.048357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.048368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.048547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.048718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.048729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.048905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.049321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.049659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.049769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.049967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 
00:33:16.786 [2024-04-17 10:29:50.050319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.050635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.050821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.050924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.051255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.051634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.051836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.052013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.052399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 
00:33:16.786 [2024-04-17 10:29:50.052797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.052985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.053165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.053369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.053380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.053599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.053834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.053849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.053970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.054247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.054605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.054877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.786 qpair failed and we were unable to recover it. 00:33:16.786 [2024-04-17 10:29:50.055107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.055211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.786 [2024-04-17 10:29:50.055221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 
00:33:16.787 [2024-04-17 10:29:50.055397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.055569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.055580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.055746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.055922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.055933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.056097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.056325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.056336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.056431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.056547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.056558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.056734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.057188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.057553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.057727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 
00:33:16.787 [2024-04-17 10:29:50.057909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.058241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.058648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.058865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.059147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.059492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.059805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.059985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.060171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.060362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.060372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 
00:33:16.787 [2024-04-17 10:29:50.060568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.060804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.060814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.060925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.061243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.061517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.061799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.061995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.062105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.062303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.062313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.062547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.062800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.062810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 
00:33:16.787 [2024-04-17 10:29:50.062912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.063329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.063633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.063818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.063982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.064338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.787 [2024-04-17 10:29:50.064620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.787 [2024-04-17 10:29:50.064815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.787 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.065039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.065248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.065279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 
00:33:16.788 [2024-04-17 10:29:50.065428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.065668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.065699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.065931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.066097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.066127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.066335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.066544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.066574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.066898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.067186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.067636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.067827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.068043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.068318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.068348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 
00:33:16.788 [2024-04-17 10:29:50.068587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.068731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.068762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.068908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.069353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.069721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.069859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.070117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.070281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.070291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.070415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.070587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.070597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.070825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 
00:33:16.788 [2024-04-17 10:29:50.071202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.071641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.071837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.071935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.072367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.072758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.072888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.073088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.073374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.073384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.073571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.073733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.073745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 
00:33:16.788 [2024-04-17 10:29:50.073914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.074262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.074557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.074857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.075020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.075220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.075231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.075349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.075524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:16.788 [2024-04-17 10:29:50.075535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:16.788 qpair failed and we were unable to recover it. 00:33:16.788 [2024-04-17 10:29:50.075712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.075887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.075897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.076041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 
00:33:17.064 [2024-04-17 10:29:50.076247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.076627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.076860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.076956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.077187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.077346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.077357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.077465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.077583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.064 [2024-04-17 10:29:50.077594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.064 qpair failed and we were unable to recover it. 00:33:17.064 [2024-04-17 10:29:50.077851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.078186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 
00:33:17.065 [2024-04-17 10:29:50.078671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.078903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.079113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.079328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.079358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.079541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.079695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.079730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.079944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.080332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.080630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.080766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.080877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 
00:33:17.065 [2024-04-17 10:29:50.081244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.081546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.081672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.081809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.082284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.082661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.082847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.083107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.083247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.083277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.083427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.083587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.083618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 
00:33:17.065 [2024-04-17 10:29:50.083839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.084057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.084087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.084364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.084564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.084593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.084872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.085398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.085718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.085980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.086079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.086289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 
00:33:17.065 [2024-04-17 10:29:50.086590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.086883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.086998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.087008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.087099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.087269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.065 [2024-04-17 10:29:50.087279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.065 qpair failed and we were unable to recover it. 00:33:17.065 [2024-04-17 10:29:50.087391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.087547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.087577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.087779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.087896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.087907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.088001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.088381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 
00:33:17.066 [2024-04-17 10:29:50.088659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.088779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.089009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.089213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.089504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.089678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.089866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.090279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.090579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.090765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 
00:33:17.066 [2024-04-17 10:29:50.090923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.091071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.091082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.091246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.091489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.091519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.091710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.092250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.092709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.092980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.093122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.093304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.093335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.093549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.093720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.093751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 
00:33:17.066 [2024-04-17 10:29:50.093902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.094201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.094556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.094844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.094969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.095080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.095274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.095284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.095543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.095745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.095755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.095967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 
00:33:17.066 [2024-04-17 10:29:50.096337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.096734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.096934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.097152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.097282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.097302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.066 qpair failed and we were unable to recover it. 00:33:17.066 [2024-04-17 10:29:50.097483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.097706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.066 [2024-04-17 10:29:50.097732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.097989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.098107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.098122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.098298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.098588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.098604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.098895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 
00:33:17.067 [2024-04-17 10:29:50.099254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.099639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.099815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.099996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.100285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.100700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.100838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.101060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.101312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 
00:33:17.067 [2024-04-17 10:29:50.101563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.101770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.101890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.102132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.102422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.102806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.102984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.103001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.103197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.103372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.103387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.103578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.103759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.103775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 
00:33:17.067 [2024-04-17 10:29:50.104017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.104205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.104221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.104472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.104582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.104599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.104795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.105251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.105694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.105836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.105938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.067 [2024-04-17 10:29:50.106340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 
00:33:17.067 [2024-04-17 10:29:50.106663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.067 [2024-04-17 10:29:50.106798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.067 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.106922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.107367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.107697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.107843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.108020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.108459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.108860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.108997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 
00:33:17.068 [2024-04-17 10:29:50.109104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.109344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.109360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.109559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.109825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.109842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.109960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.110366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.110768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.110912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.111043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.111440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 
00:33:17.068 [2024-04-17 10:29:50.111688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.111872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.112095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.112282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.112300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.112485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.112672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.112716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.112894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.113300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.113672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.113857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.114079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 
00:33:17.068 [2024-04-17 10:29:50.114377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.114817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.114994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.115081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.115267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.115277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.115446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.115562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.115572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.115832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.116024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.116035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.116267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.116383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.116394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.068 qpair failed and we were unable to recover it. 00:33:17.068 [2024-04-17 10:29:50.116574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.068 [2024-04-17 10:29:50.116749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.116762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 
00:33:17.069 [2024-04-17 10:29:50.116922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.117097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.117128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.117406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.117715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.117759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.117925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.118195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.118225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.118363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.118564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.118594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.118847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.119071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.119101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.119278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.119521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.119550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.119710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 
00:33:17.069 [2024-04-17 10:29:50.120117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.120528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.120775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.120892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.121121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.121343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.121354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.121627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.121810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.121820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.121985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.122098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.122109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.122365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.122580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.122610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 
00:33:17.069 [2024-04-17 10:29:50.122907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.123289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.123791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.123965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.124128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.124367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.124651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.069 [2024-04-17 10:29:50.124777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.069 qpair failed and we were unable to recover it. 00:33:17.069 [2024-04-17 10:29:50.124948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.125234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.125264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 
00:33:17.070 [2024-04-17 10:29:50.125537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.125712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.125743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.126059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.126283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.126690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.126875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.127000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.127257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.127292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.127583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.127751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.127786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.127996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.128325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.128358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 
00:33:17.070 [2024-04-17 10:29:50.128521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.128687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.128719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.128915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.129149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.129189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.129509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.129726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.129761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.129926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.130068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.130098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.130333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.130491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.130521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.130673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.130993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.131255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 
00:33:17.070 [2024-04-17 10:29:50.131543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.131853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.131962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.132125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.132298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.132308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.132475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.132687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.132697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.132870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.132992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.133002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.133234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.133410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.133420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.133591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.133791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.133802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 
00:33:17.070 [2024-04-17 10:29:50.134000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.134175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.134217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.134442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.134710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.134741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.134951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.135169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.135199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.070 [2024-04-17 10:29:50.135469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.135613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.070 [2024-04-17 10:29:50.135641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.070 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.135789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.135944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.135954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.136125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.136301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.136331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.136500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.136744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.136776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 
00:33:17.071 [2024-04-17 10:29:50.137025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.137283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.137293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.137455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.137652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.137662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.137820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.138190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.138501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.138835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.138937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.139308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 
00:33:17.071 [2024-04-17 10:29:50.139703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.139957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.140153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.140331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.140360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.140516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.140669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.140702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.140866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.141226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.141784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.141931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.142153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.142306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.142335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 
00:33:17.071 [2024-04-17 10:29:50.142501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.142724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.142756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.142979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.143238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.143249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.143422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.143596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.143605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.143858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.144215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.144557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.144832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.144964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 
00:33:17.071 [2024-04-17 10:29:50.145070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.145158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.145169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.145370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.145556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.071 [2024-04-17 10:29:50.145566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.071 qpair failed and we were unable to recover it. 00:33:17.071 [2024-04-17 10:29:50.145667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.145780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.145791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.145989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.146167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.146197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.146420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.146655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.146687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.146942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.147156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.147186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.147322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.147542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.147574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 
00:33:17.072 [2024-04-17 10:29:50.147791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.147997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.148026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.148248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.148387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.148417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.148611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.148809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.148820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.148923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.149221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.149459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.149780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.149988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 
00:33:17.072 [2024-04-17 10:29:50.150248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.150421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.150431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.150610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.150718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.150732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.150936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.151234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.151594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.151719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.151952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.152139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.152170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.152321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.152633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.152683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 
00:33:17.072 [2024-04-17 10:29:50.152963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.153124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.153155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.153386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.153599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.153629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.153879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.154210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.154678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.154860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.155078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.155369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.155380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 00:33:17.072 [2024-04-17 10:29:50.155575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.155738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.155749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.072 qpair failed and we were unable to recover it. 
00:33:17.072 [2024-04-17 10:29:50.155925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.072 [2024-04-17 10:29:50.156129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.156139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.156253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.156528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.156539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.156652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.156846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.156856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.156964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.157288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.157538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.157857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.157981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 
00:33:17.073 [2024-04-17 10:29:50.158093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.158387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.158767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.158945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.159170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.159330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.159341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.159445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.159641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.159657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.159888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.160081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.160111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.160347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.160493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.160523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 
00:33:17.073 [2024-04-17 10:29:50.160811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.161172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.161442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.161715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.161967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.162161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.162362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.162658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.162790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 
00:33:17.073 [2024-04-17 10:29:50.162957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.163351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.163719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.163849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.164017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.164196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.164229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.164393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.164591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.164621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.164844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.165079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.165092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 00:33:17.073 [2024-04-17 10:29:50.165328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.165480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.073 [2024-04-17 10:29:50.165510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.073 qpair failed and we were unable to recover it. 
00:33:17.074 [2024-04-17 10:29:50.165701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.166005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.166046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.166204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.166445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.166476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.166696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.166985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.167029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.167137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.167333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.167349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.167613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.167784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.167795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.167971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.168262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 
00:33:17.074 [2024-04-17 10:29:50.168627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.168818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.168956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.169189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.169220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.169396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.169723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.169758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.169944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.170112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.170142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.170294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.170515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.170549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.170715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.171291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 
00:33:17.074 [2024-04-17 10:29:50.171656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.171843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.171935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.172314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.172705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.172953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.173189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.173350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.173361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.173567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.173798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.173810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.174064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.174367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.174397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 
00:33:17.074 [2024-04-17 10:29:50.174535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.174712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.174756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.074 qpair failed and we were unable to recover it. 00:33:17.074 [2024-04-17 10:29:50.174989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.074 [2024-04-17 10:29:50.175096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.175106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.175342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.175550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.175563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.175678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.175872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.175884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.176147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.176356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.176386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.176549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.176768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.176779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.177019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.177157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.177188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 
00:33:17.075 [2024-04-17 10:29:50.177426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.177628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.177669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.177949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.178170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.178202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.178478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.178745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.178777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.178920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.179189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.179220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.179444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.179610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.179639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.179862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.180241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 
00:33:17.075 [2024-04-17 10:29:50.180609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.180814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.181092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.181316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.181349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.181486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.181756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.181788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.182005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.182198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.182209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.182347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.182633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.182680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.182893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.183425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 
00:33:17.075 [2024-04-17 10:29:50.183705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.183937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.184160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.184405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.184435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.184606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.184762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.184793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.184952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.185164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.185194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.185398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.185619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.185657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.185809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.186269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 
00:33:17.075 [2024-04-17 10:29:50.186675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.186930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.187058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.187267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.187298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.075 qpair failed and we were unable to recover it. 00:33:17.075 [2024-04-17 10:29:50.187508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.075 [2024-04-17 10:29:50.187684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.187717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.187859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.188318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.188715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.188907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.189214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.189511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.189541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 
00:33:17.076 [2024-04-17 10:29:50.189762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.190182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.190600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.190774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.191052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.191258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.191269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.191402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.191599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.191629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.191851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.192237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 
00:33:17.076 [2024-04-17 10:29:50.192694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.192994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.193293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.193471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.193482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.193597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.193808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.193841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.194124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.194420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.194457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.194689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.194893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.194922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.195134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.195408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.195438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.195700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.195853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.195884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 
00:33:17.076 [2024-04-17 10:29:50.196098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.196208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.196218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.196422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.196696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.196738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.196951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.197163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.197192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.197522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.197824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.197861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.198095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.198315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.198346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.198498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.198693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.198724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.198877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.199032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.199063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 
00:33:17.076 [2024-04-17 10:29:50.199313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.199560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.199590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.076 qpair failed and we were unable to recover it. 00:33:17.076 [2024-04-17 10:29:50.199922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.076 [2024-04-17 10:29:50.200202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.200235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.200455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.200662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.200693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.200973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.201104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.201141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.201384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.201584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.201614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.201762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.202089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.202119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.202342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.202577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.202607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 
00:33:17.077 [2024-04-17 10:29:50.202835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.203260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.203673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.203909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.204030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.204122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.204132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.204342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.204612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.204657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.204885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.205110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.205140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.205343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.205538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.205579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 
00:33:17.077 [2024-04-17 10:29:50.205838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.206333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.206744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.206999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.207120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.207361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.207391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.207610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.207851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.207883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.208092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.208312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.208342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.208549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.208762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.208803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 
00:33:17.077 [2024-04-17 10:29:50.209108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.209391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.209784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.209914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.210026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.210394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.210690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.210880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.211113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.211215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.211227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 
00:33:17.077 [2024-04-17 10:29:50.211402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.211518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.211529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.077 qpair failed and we were unable to recover it. 00:33:17.077 [2024-04-17 10:29:50.211701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.077 [2024-04-17 10:29:50.211870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.211901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.212124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.212288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.212318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.212624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.212768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.212799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.213129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.213285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.213295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.213457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.213723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.213755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.213963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.214178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.214208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 
00:33:17.078 [2024-04-17 10:29:50.214338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.214525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.214555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.214776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.214991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.215021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.215297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.215555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.215589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.215821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.216039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.216069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.216357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.216667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.216702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.216923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.217081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.217111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.217435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.217634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.217676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 
00:33:17.078 [2024-04-17 10:29:50.217824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.217971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.218000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.218325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.218476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.218506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.218776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.218931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.218942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.219186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.219329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.219359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.219506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.219620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.219669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.219844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.220084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.220114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.220404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.220608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.220638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 
00:33:17.078 [2024-04-17 10:29:50.220952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.221281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.221517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.221844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.221960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.222199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.222345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.222375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.222514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.222720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.222752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.223051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.223216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.223248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 
00:33:17.078 [2024-04-17 10:29:50.223388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.223664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.078 [2024-04-17 10:29:50.223696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.078 qpair failed and we were unable to recover it. 00:33:17.078 [2024-04-17 10:29:50.223852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.224149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.224670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.224863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.225083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.225250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.225290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.225577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.225781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.225811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.225956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.226246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.226275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 
00:33:17.079 [2024-04-17 10:29:50.226593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.226770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.226802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.227032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.227182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.227212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.227513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.227732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.227765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.227991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.228300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.228664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.228834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.229139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.229353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.229382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 
00:33:17.079 [2024-04-17 10:29:50.229518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.229690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.229729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.229894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.230100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.230131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.230346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.230614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.230656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.230886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.231173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.231203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.231488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.231687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.231720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.231942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.232248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.232278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.232496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.232700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.232731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 
00:33:17.079 [2024-04-17 10:29:50.232898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.233043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.233088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.233253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.233529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.233559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.233860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.234071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.234112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.234336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.234555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.234584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.079 qpair failed and we were unable to recover it. 00:33:17.079 [2024-04-17 10:29:50.234810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.079 [2024-04-17 10:29:50.235082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.235111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.235372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.235496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.235506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.235672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.235778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.235788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 
00:33:17.080 [2024-04-17 10:29:50.235953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.236181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.236211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.236434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.236620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.236662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.236986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.237220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.237250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.237445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.237652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.237689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.237854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.238133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.238163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.238433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.238545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.238557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.238897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 
00:33:17.080 [2024-04-17 10:29:50.239306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.239706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.239960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.240127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.240271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.240301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.240510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.240686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.240700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.240879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.241194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.241449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.241777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 
00:33:17.080 [2024-04-17 10:29:50.242051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.242353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.242383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.242635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.242860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.242895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.243152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.243441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.243451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.243568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.243809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.243842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.244071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.244230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.244260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.244400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.244641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.244680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.244834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.245049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.245081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 
00:33:17.080 [2024-04-17 10:29:50.245329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.245666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.245698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.245924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.246153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.246192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.246273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.246377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.246389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.080 qpair failed and we were unable to recover it. 00:33:17.080 [2024-04-17 10:29:50.246558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.080 [2024-04-17 10:29:50.246820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.246831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.246949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.247112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.247142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.247318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.247545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.247574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.247880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 
00:33:17.081 [2024-04-17 10:29:50.248284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.248626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.248891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.249154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.249361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.249391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.249688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.249846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.249877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.250185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.250604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.250841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.250980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 
00:33:17.081 [2024-04-17 10:29:50.251169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.251442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.251472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.251698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.251919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.251949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.252285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.252498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.252508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.252620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.252763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.252806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.253021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.253288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.253318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.253524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.253833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.253864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.254141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.254285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.254316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 
00:33:17.081 [2024-04-17 10:29:50.254534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.254705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.254715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.254878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.255010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.255047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.255217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.255560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.255590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.255908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.256182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.256192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.256431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.256675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.256707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.257012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.257277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.257311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.257477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.257718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.257751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 
00:33:17.081 [2024-04-17 10:29:50.257928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.258358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.258745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.258880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.081 [2024-04-17 10:29:50.259003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.259184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.081 [2024-04-17 10:29:50.259194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.081 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.259300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.259475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.259487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.259680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.259859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.259889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.260043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.260197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.260228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 
00:33:17.082 [2024-04-17 10:29:50.260393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.260591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.260602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.260774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.261038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.261068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.261226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.261433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.261464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.261747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.261973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.262004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.262228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.262438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.262468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.262684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.262917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.262950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.263180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.263370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.263400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 
00:33:17.082 [2024-04-17 10:29:50.263681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.263834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.263845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.264032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.264190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.264220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.264463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.264669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.264703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.264981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.265224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.265254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.265499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.265797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.265828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.266101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.266375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.266406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.266714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.266932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.266962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 
00:33:17.082 [2024-04-17 10:29:50.267223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.267536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.267565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.267836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.267980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.268010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.268224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.268429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.268459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.268689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.268834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.268864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.269138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.269345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.269375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.269592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.269811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.269847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.270105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.270269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.270299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 
00:33:17.082 [2024-04-17 10:29:50.270560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.270706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.270738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.271042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.271288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.271318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.271474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.271697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.271746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.082 qpair failed and we were unable to recover it. 00:33:17.082 [2024-04-17 10:29:50.271953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.082 [2024-04-17 10:29:50.272247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.272276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.272575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.272777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.272788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.272951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.273238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.273248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.273413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.273701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.273732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 
00:33:17.083 [2024-04-17 10:29:50.273895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.274041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.274070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.274273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.274541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.274570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.274829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.274986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.275016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.275219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.275343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.275372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.275620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.275849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.275883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.276113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.276383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.276413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.276546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.276772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.276803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 
00:33:17.083 [2024-04-17 10:29:50.277090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.277339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.277370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.277587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.277851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.277881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.278021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.278202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.278235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.278475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.278696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.278727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.278899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.279198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.279239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.279474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.279676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.279687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.279888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.279996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.280007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 
00:33:17.083 [2024-04-17 10:29:50.280171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.280274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.280285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.280461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.280663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.280694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.280902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.281086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.281116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.281236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.281513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.281547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.281710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.281984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.282014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.282289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.282508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.282519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.282723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.282928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.282958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 
00:33:17.083 [2024-04-17 10:29:50.283176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.283377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.283407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.083 qpair failed and we were unable to recover it. 00:33:17.083 [2024-04-17 10:29:50.283597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.283759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.083 [2024-04-17 10:29:50.283769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.283939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.284153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.284183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.284396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.284686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.284723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.284947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.285345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.285678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.285928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 
00:33:17.084 [2024-04-17 10:29:50.286214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.286428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.286459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.286764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.286897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.286930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.287090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.287395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.287426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.287721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.287992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.288024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.288239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.288507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.288537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.288811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.288961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.288996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.289185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.289289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.289319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 
00:33:17.084 [2024-04-17 10:29:50.289565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.289715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.289746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.289890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.290363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.290675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.290871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.291103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.291283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.291293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.291602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.291774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.291806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.292020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.292284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.292293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 
00:33:17.084 [2024-04-17 10:29:50.292464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.292672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.292703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.292858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.293068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.293097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.293407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.293590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.084 [2024-04-17 10:29:50.293619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.084 qpair failed and we were unable to recover it. 00:33:17.084 [2024-04-17 10:29:50.293777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.294052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.294082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.294224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.294487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.294517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.294818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.295037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.295068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.295270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.295542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.295572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 
00:33:17.085 [2024-04-17 10:29:50.295735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.296214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.296530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.296776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.296991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.297200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.297230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.297433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.297633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.297671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.297984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.298263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.298273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.298487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.298638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.298685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 
00:33:17.085 [2024-04-17 10:29:50.298932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.299150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.299188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.299444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.299665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.299697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.299999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.300217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.300258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.300507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.300683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.300694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.300875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.301182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.301596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.301765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 
00:33:17.085 [2024-04-17 10:29:50.301956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.302155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.302480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.302818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.303044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.303250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.303281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.303591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.303813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.303824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.303932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.304428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 
00:33:17.085 [2024-04-17 10:29:50.304751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.304942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.085 [2024-04-17 10:29:50.305147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.305325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.085 [2024-04-17 10:29:50.305355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.085 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.305498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.305664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.305695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.305864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.306267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.306667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.306847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.307086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.307250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.307280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 
00:33:17.086 [2024-04-17 10:29:50.307537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.307714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.307726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.307887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.308262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.308791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.308977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.309120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.309439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.309472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.309695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.309851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.309881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.310048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.310267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.310297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 
00:33:17.086 [2024-04-17 10:29:50.310576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.310848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.310879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.311185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.311451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.311462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.311622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.311866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.311897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.312127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.312349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.312379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.312574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.312745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.312779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.313059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.313287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.313317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.313478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.313754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.313796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 
00:33:17.086 [2024-04-17 10:29:50.314102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.314343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.314373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.314534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.314777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.314817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.315143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.315358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.315387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.315614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.315776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.315787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.315889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.316070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.316101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.316333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.316548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.316578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 00:33:17.086 [2024-04-17 10:29:50.316753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.317057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.317092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.086 qpair failed and we were unable to recover it. 
00:33:17.086 [2024-04-17 10:29:50.317334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.317538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.086 [2024-04-17 10:29:50.317568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.317796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.318099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.318140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.318426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.318641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.318687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.318914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.319222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.319263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.319521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.319738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.319769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.320042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.320242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.320272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.320521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.320723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.320755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 
00:33:17.087 [2024-04-17 10:29:50.320955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.321121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.321150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.321356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.321638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.321680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.321893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.322382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.322669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.322946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.323200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.323406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.323437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 00:33:17.087 [2024-04-17 10:29:50.323693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.323843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.087 [2024-04-17 10:29:50.323873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.087 qpair failed and we were unable to recover it. 
00:33:17.087 [2024-04-17 10:29:50.324094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-04-17 10:29:50.324249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.087 [2024-04-17 10:29:50.324280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.087 qpair failed and we were unable to recover it.
00:33:17.087 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 10:29:50.324 through 10:29:50.389 ...]
00:33:17.369 [2024-04-17 10:29:50.389358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.369 [2024-04-17 10:29:50.389549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.369 [2024-04-17 10:29:50.389580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.369 qpair failed and we were unable to recover it.
00:33:17.369 [2024-04-17 10:29:50.389771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.389987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.390017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.390294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.390531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.390571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.390829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.391252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.391567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.391858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.391992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.392174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 
00:33:17.369 [2024-04-17 10:29:50.392461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.392740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.392983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.393100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.393345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.393356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.393463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.393632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.393684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.393932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.394144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.394177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.394504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.394666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.394698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.394914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.395140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.395181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 
00:33:17.369 [2024-04-17 10:29:50.395420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.395604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.395614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.395785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.396200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.396398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.396831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.396920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.397044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.397133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.397143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 00:33:17.369 [2024-04-17 10:29:50.397322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.397417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.397428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.369 qpair failed and we were unable to recover it. 
00:33:17.369 [2024-04-17 10:29:50.397664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.369 [2024-04-17 10:29:50.397896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.397907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.398030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.398353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.398637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.398877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.399117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.399441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.399794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.399874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 
00:33:17.370 [2024-04-17 10:29:50.399999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.400259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.400269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.400499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.400614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.400625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.400834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.401381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.401743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.401851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.402039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.402267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.402299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.402494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.402734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.402777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 
00:33:17.370 [2024-04-17 10:29:50.403007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.403232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.403263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.403458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.403631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.403642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.403916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.404329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.404750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.404873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.405043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.405207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.405217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.405478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.405640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.405664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 
00:33:17.370 [2024-04-17 10:29:50.405843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.406204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.406583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.406767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.406927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.407049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.407059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.407229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.407326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.370 [2024-04-17 10:29:50.407336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.370 qpair failed and we were unable to recover it. 00:33:17.370 [2024-04-17 10:29:50.407495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.407624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.407634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.407744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.407847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.407859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 
00:33:17.371 [2024-04-17 10:29:50.407957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.408065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.408077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.408317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.408553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.408585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.408822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.408975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.409008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.409310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.409463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.409473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.409638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.409812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.409843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.410144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.410275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.410305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.410455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.410756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.410790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 
00:33:17.371 [2024-04-17 10:29:50.411006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.411404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.411775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.411890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.412009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.412235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.412528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.412770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.412928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 
00:33:17.371 [2024-04-17 10:29:50.413207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.413558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.413775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.413972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.414368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.414638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.414914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.415076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.415215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.415245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.415434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.415551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.415582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 
00:33:17.371 [2024-04-17 10:29:50.415815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.416074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.416108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.416357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.416563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.416573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.416833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.417008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.417020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.417206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.417385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.417396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.371 qpair failed and we were unable to recover it. 00:33:17.371 [2024-04-17 10:29:50.417490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.371 [2024-04-17 10:29:50.417660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.417673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.417948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.418276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 
00:33:17.372 [2024-04-17 10:29:50.418560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.418725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.418893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.419339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.419688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.419859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.420057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.420281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.420314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.420470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.420621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.420661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.420942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 
00:33:17.372 [2024-04-17 10:29:50.421461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.421773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.421886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.422004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.422309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.422598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.422720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.422900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.423113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.423143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.423387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.423600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.423634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 
00:33:17.372 [2024-04-17 10:29:50.423866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.424208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.424600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.424855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.425017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.425300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.425712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.425820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 00:33:17.372 [2024-04-17 10:29:50.425987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.426166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.372 [2024-04-17 10:29:50.426176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.372 qpair failed and we were unable to recover it. 
00:33:17.372 [2024-04-17 10:29:50.426346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.372 [2024-04-17 10:29:50.426450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.372 [2024-04-17 10:29:50.426460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.372 qpair failed and we were unable to recover it.
00:33:17.372-00:33:17.378 [the same three-line error sequence repeats, differing only in timestamp, for every retry from 2024-04-17 10:29:50.426 through 10:29:50.483: each attempt to connect tqpair=0x7f2080000b90 to addr=10.0.0.2, port=4420 fails in posix_sock_create with errno = 111, and every qpair fails and cannot be recovered]
00:33:17.378 [2024-04-17 10:29:50.483794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.378 [2024-04-17 10:29:50.483954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.378 [2024-04-17 10:29:50.483964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.378 qpair failed and we were unable to recover it. 00:33:17.378 [2024-04-17 10:29:50.484071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.378 [2024-04-17 10:29:50.484259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.378 [2024-04-17 10:29:50.484269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.378 qpair failed and we were unable to recover it. 00:33:17.378 [2024-04-17 10:29:50.484382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.378 [2024-04-17 10:29:50.484501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.484515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.484624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.484806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.484818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.484942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.485251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.485706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.485912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 
00:33:17.379 [2024-04-17 10:29:50.486134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.486313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.486342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.486525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.486739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.486775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.487068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.487353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.487714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.487839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.487936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.488266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 
00:33:17.379 [2024-04-17 10:29:50.488632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.488874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.489067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.489288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.489325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.489539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.489820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.489854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.490098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.490287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.490317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.490537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.490698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.490730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.491002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.491398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 
00:33:17.379 [2024-04-17 10:29:50.491698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.491893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.492082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.492362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.492687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.492810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.492952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.493397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.493763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.493879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 
00:33:17.379 [2024-04-17 10:29:50.493989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.494082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.494092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.379 qpair failed and we were unable to recover it. 00:33:17.379 [2024-04-17 10:29:50.494201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.379 [2024-04-17 10:29:50.494382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.494420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.494589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.494820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.494853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.495058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.495184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.495194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.495428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.495626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.495638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.495815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.495990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.496109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 
00:33:17.380 [2024-04-17 10:29:50.496364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.496767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.496958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.497127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.497299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.497329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.497537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.497803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.497814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.497993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.498383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.498608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.498787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 
00:33:17.380 [2024-04-17 10:29:50.498883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.499049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.499078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.499404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.499691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.499723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.499882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.500096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.500126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.500403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.500679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.500693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.500874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.501324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.501629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.501833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 
00:33:17.380 [2024-04-17 10:29:50.502015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.502245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.502276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.502550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.502700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.502712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.502904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.503306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.503640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.503956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.504259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.504573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 
00:33:17.380 [2024-04-17 10:29:50.504868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.504986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.380 qpair failed and we were unable to recover it. 00:33:17.380 [2024-04-17 10:29:50.505164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.380 [2024-04-17 10:29:50.505326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.505337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.505459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.505561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.505570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.505680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.505867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.505877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.506057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.506195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.506205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.506405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.506624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.506635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.506874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.506993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.507004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 
00:33:17.381 [2024-04-17 10:29:50.507165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.507338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.507348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.507517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.507766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.507777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.507974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.508261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.508663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.508847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.509014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.509166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.509197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.509383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.509551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.509581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 
00:33:17.381 [2024-04-17 10:29:50.509846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.510318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.510625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.510909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.511048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.511214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.511244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.511401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.511661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.511697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.511952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.512240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 
00:33:17.381 [2024-04-17 10:29:50.512535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.512721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.512895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.513258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.513568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.513788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.514032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.514330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.514365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.514537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.514697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.514730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.381 qpair failed and we were unable to recover it. 00:33:17.381 [2024-04-17 10:29:50.514941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.515150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.381 [2024-04-17 10:29:50.515189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 
00:33:17.382 [2024-04-17 10:29:50.515491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.515698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.515729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.515957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.516112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.516143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.516423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.516634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.516674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.516895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.517098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.517132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.517275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.517585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.517614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.517838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.518246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 
00:33:17.382 [2024-04-17 10:29:50.518591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.518767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.518887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.519364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.519723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.519850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.519945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.520226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.520641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.520835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 
00:33:17.382 [2024-04-17 10:29:50.520958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.521319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.521675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.521953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.522175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.522340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.522370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.522700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.522856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.522886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.523097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.523348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.523384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.523778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 
00:33:17.382 [2024-04-17 10:29:50.524281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.524812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.524966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.525092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.525389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.382 qpair failed and we were unable to recover it. 00:33:17.382 [2024-04-17 10:29:50.525694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.382 [2024-04-17 10:29:50.525867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.526062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.526172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.526182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.526396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.526566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.526596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 
00:33:17.383 [2024-04-17 10:29:50.526965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.527284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.527321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.527604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.527914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.527946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.528087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.528322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.528333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.528527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.528825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.528857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.529167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.529271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.529281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.529518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.529748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.529758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.529923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 
00:33:17.383 [2024-04-17 10:29:50.530152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.530447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.530746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.530932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.531040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.531239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.531269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.531517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.531706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.531736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.531961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.532366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 
00:33:17.383 [2024-04-17 10:29:50.532700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.532879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.533135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.533345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.533375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.533669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.533983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.533994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.534175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.534472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.534708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.534837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.534999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.535263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.535292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 
00:33:17.383 [2024-04-17 10:29:50.535442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.535579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.535609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.535839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.536121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.536151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.536379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.536531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.536562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.536854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.536976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.537019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.537271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.537425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.537454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.537694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.537981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.383 [2024-04-17 10:29:50.537991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.383 qpair failed and we were unable to recover it. 00:33:17.383 [2024-04-17 10:29:50.538241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.538479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.538489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 
00:33:17.384 [2024-04-17 10:29:50.538652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.538829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.538839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.539013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.539359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.539675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.539864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.540095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.540268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.540280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.540463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.540711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.540743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.540907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.541071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.541102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 
00:33:17.384 [2024-04-17 10:29:50.541337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.541571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.541604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.541856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.542206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.542727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.542981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.543081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.543335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.543346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.543439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.543561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.543572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.543745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 
00:33:17.384 [2024-04-17 10:29:50.544183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.544539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.544716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.544878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.545068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.545078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.545264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.545496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.545526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.545702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.545992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.546025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.546301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.546471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.546481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.546661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.546826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.546838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 
00:33:17.384 [2024-04-17 10:29:50.547055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.547328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.547358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.547601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.547825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.547868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.548068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.548179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.548211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.548365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.548657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.548689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.548836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.549191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.549426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.549642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 
00:33:17.384 [2024-04-17 10:29:50.549879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.550001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.550011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.550171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.550275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.550286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.384 qpair failed and we were unable to recover it. 00:33:17.384 [2024-04-17 10:29:50.550458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.384 [2024-04-17 10:29:50.550567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.550578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.550691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.550928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.550958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.551203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.551429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.551459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.551691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.551935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.551965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.552167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.552327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.552360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 
00:33:17.385 [2024-04-17 10:29:50.552548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.552850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.552883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.553086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.553295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.553338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.553524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.553780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.553812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.554022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.554184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.554215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.554516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.554726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.554758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.554938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.555426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 
00:33:17.385 [2024-04-17 10:29:50.555658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.555826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.555920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.556221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.556514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.556714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.556822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.557022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.557035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.558543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.558830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.558875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.559145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.559443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.559474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 
00:33:17.385 [2024-04-17 10:29:50.559627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.559877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.559919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.560137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.560364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.560574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.560816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.560919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.561218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.561549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 
00:33:17.385 [2024-04-17 10:29:50.561868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.561978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.562111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.562352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.562382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.562719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.562946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.562956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.563149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.563350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.563361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.385 qpair failed and we were unable to recover it. 00:33:17.385 [2024-04-17 10:29:50.563590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.385 [2024-04-17 10:29:50.563763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.563775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.563953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.564254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 
00:33:17.386 [2024-04-17 10:29:50.564579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.564683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.564795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.565274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.565560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.565802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.565910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.566100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.566370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 
00:33:17.386 [2024-04-17 10:29:50.566654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.566825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.566937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.567383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.567686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.567818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.567919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.568219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.568610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 
00:33:17.386 [2024-04-17 10:29:50.568827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.568993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.569144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.569288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.569318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.569465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.569689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.569733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.569881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.570238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.570467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.570666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.570858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 
00:33:17.386 [2024-04-17 10:29:50.570976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.571350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.571642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.571787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.572018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.572161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.572191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.572396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.572691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.572724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.572865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.573016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.573051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 00:33:17.386 [2024-04-17 10:29:50.573284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.573486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.573516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.386 qpair failed and we were unable to recover it. 
00:33:17.386 [2024-04-17 10:29:50.573750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.386 [2024-04-17 10:29:50.574019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.574029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.574284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.574401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.574411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.574505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.574685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.574696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.574900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.575251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.575541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.575833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.575994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.576034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 
00:33:17.387 [2024-04-17 10:29:50.576254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.576412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.576443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.576757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.576968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.576979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.577157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.577287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.577327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.577507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.577662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.577693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.577905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.578243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.578680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.578927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 
00:33:17.387 [2024-04-17 10:29:50.579127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.579352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.579641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.579776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.579952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.580229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.580490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.580750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.580909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 
00:33:17.387 [2024-04-17 10:29:50.581288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.581487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.581838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.581956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.582195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.582342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.582373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.582684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.582966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.582977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.583141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.583404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.387 [2024-04-17 10:29:50.583433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.387 qpair failed and we were unable to recover it. 00:33:17.387 [2024-04-17 10:29:50.583663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.583873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.583907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 
00:33:17.388 [2024-04-17 10:29:50.584076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.584404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.584790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.584985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.585249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.585412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.585422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.585541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.585638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.585654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.585846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.586238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 
00:33:17.388 [2024-04-17 10:29:50.586510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.586818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.586921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.587029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.587321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.587633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.587765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.587931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.588235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 
00:33:17.388 [2024-04-17 10:29:50.588583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.588750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.588965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.589172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.589183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.589329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.589529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.589559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.589865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.590169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.590468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.590658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.590838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 
00:33:17.388 [2024-04-17 10:29:50.591154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.591442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.591822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.591991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.592116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.592450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.592724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.592849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.592999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.593192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.593223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 
00:33:17.388 [2024-04-17 10:29:50.593488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.593696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.593731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.593972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.594125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.594155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.594373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.594581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.594612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.594934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.595141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.595152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.388 qpair failed and we were unable to recover it. 00:33:17.388 [2024-04-17 10:29:50.595311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.595515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.388 [2024-04-17 10:29:50.595525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.595685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.595867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.595901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.596060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 
00:33:17.389 [2024-04-17 10:29:50.596361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.596711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.596994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.597172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.597339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.597368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.597619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.597943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.597985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.598210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.598527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.598556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.598835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.598988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.599129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 
00:33:17.389 [2024-04-17 10:29:50.599436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.599703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.599823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.599997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.600298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.600662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.600841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.600983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.601192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.601223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.601517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.601763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.601795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 
00:33:17.389 [2024-04-17 10:29:50.602015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.602373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.602719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.602965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.603141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.603441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.603686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.603791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.603968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 
00:33:17.389 [2024-04-17 10:29:50.604199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.604484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.604719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.604963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.605250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.605368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.605379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.605558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.605759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.605770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.605958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.606063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.606093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.606333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.606552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.606582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 
00:33:17.389 [2024-04-17 10:29:50.606832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.607127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.607157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.389 qpair failed and we were unable to recover it. 00:33:17.389 [2024-04-17 10:29:50.607456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.389 [2024-04-17 10:29:50.607690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.607722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.607893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.608311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.608779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.608959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.609050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.609358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 
00:33:17.390 [2024-04-17 10:29:50.609651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.609751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.609926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.610297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.610582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.610757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.610985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.611344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.611810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.611994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 
00:33:17.390 [2024-04-17 10:29:50.612153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.612318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.612329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.612492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.612678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.612693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.612873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.612994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.613104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.613534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.613843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.613956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.614117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 
00:33:17.390 [2024-04-17 10:29:50.614405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.614730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.614850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.614953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.615321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.615763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.615937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.616131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.616239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.616269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 00:33:17.390 [2024-04-17 10:29:50.616601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.616842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.390 [2024-04-17 10:29:50.616878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.390 qpair failed and we were unable to recover it. 
00:33:17.390 [2024-04-17 10:29:50.617171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.390 [2024-04-17 10:29:50.617515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.390 [2024-04-17 10:29:50.617524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.390 qpair failed and we were unable to recover it.
00:33:17.390 [2024-04-17 10:29:50.617648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.390 [2024-04-17 10:29:50.617765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.390 [2024-04-17 10:29:50.617775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.390 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error" for tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt logged between 10:29:50.617 and 10:29:50.677 ...]
00:33:17.395 [2024-04-17 10:29:50.677507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.395 [2024-04-17 10:29:50.677721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.395 [2024-04-17 10:29:50.677752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.395 qpair failed and we were unable to recover it.
00:33:17.395 [2024-04-17 10:29:50.677930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.678096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.678107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.678271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.678481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.678511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.678732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.679317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.679654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.679782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.679993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.680341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 
00:33:17.395 [2024-04-17 10:29:50.680609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.680792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.680974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.681265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.681624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.681854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.682085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.682291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.682324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.682553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.682766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.682798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 00:33:17.395 [2024-04-17 10:29:50.682962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.683171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.395 [2024-04-17 10:29:50.683200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.395 qpair failed and we were unable to recover it. 
00:33:17.670 [2024-04-17 10:29:50.683530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.683729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.683742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.670 qpair failed and we were unable to recover it. 00:33:17.670 [2024-04-17 10:29:50.683979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.670 qpair failed and we were unable to recover it. 00:33:17.670 [2024-04-17 10:29:50.684346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.670 qpair failed and we were unable to recover it. 00:33:17.670 [2024-04-17 10:29:50.684706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.684910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.670 qpair failed and we were unable to recover it. 00:33:17.670 [2024-04-17 10:29:50.685071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.670 [2024-04-17 10:29:50.685242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.685252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.685430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.685595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.685608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.685854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.685965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.685976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 
00:33:17.671 [2024-04-17 10:29:50.686174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.686472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.686873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.686998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.687123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.687228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.687239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.687370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.687608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.687638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.687876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.688203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 
00:33:17.671 [2024-04-17 10:29:50.688532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.688856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.689012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.689225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.689234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.689395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.689570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.689580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.689810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.689991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.690001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.690121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.690234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.690245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.690355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.690604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.690634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.690848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.691069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.691098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 
00:33:17.671 [2024-04-17 10:29:50.691428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.691600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.691610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.691773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.692193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.692616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.692843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.692964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.693163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.693471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 
00:33:17.671 [2024-04-17 10:29:50.693809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.693941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.694213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.694430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.694460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.694603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.694825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.694857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.695093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.695246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.695276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.695497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.695661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.671 [2024-04-17 10:29:50.695692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.671 qpair failed and we were unable to recover it. 00:33:17.671 [2024-04-17 10:29:50.695942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.696326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 
00:33:17.672 [2024-04-17 10:29:50.696769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.696952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.697138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.697300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.697310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.697594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.697773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.697783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.697894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.697997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.698114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.698374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.698676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.698806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 
00:33:17.672 [2024-04-17 10:29:50.698965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.699289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.699688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.699869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.700014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.700158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.700188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.700421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.700624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.700665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.700945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.701354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 
00:33:17.672 [2024-04-17 10:29:50.701652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.701764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.701897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.702287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.702686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.702857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.702974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.703358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.703690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.703862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 
00:33:17.672 [2024-04-17 10:29:50.703967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.704249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.704598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.704786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.672 qpair failed and we were unable to recover it. 00:33:17.672 [2024-04-17 10:29:50.705015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.705308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.672 [2024-04-17 10:29:50.705338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.705491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.705652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.705662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.705837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.706024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.706054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.706352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.706583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.706612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 
00:33:17.673 [2024-04-17 10:29:50.706899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.707159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.707196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.707493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.707783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.707817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.708152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.708453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.708484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.708798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.709271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.709557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.709803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.709999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.710010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 
00:33:17.673 [2024-04-17 10:29:50.710191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.710499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.710529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.710773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.710988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.711018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.711172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.711465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.711495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.711633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.711890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.711921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.712198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.712423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.712453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b60 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.712606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.712841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.712853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.713034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.713264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.713274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 
00:33:17.673 [2024-04-17 10:29:50.713511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.713626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.713636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.713830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.714213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.714662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.714876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.714978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.715218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.715250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.715551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.715820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.715851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 00:33:17.673 [2024-04-17 10:29:50.716159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.716431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.673 [2024-04-17 10:29:50.716461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.673 qpair failed and we were unable to recover it. 
00:33:17.673 [2024-04-17 10:29:50.716671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.673 [2024-04-17 10:29:50.716882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.673 [2024-04-17 10:29:50.716892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.673 qpair failed and we were unable to recover it.
00:33:17.673-00:33:17.679 [... the same error sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for each reconnect attempt from 2024-04-17 10:29:50.717066 through 10:29:50.775401 ...]
00:33:17.679 [2024-04-17 10:29:50.775568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.775845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.775855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.776054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.776436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.776641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.776827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.776999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.777213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.777628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.777755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 
00:33:17.679 [2024-04-17 10:29:50.777844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.778272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.778490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.778677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.778870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.779159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.779674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.779846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.780050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 
00:33:17.679 [2024-04-17 10:29:50.780433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.679 [2024-04-17 10:29:50.780794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.679 [2024-04-17 10:29:50.780906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.679 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.780997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.781412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.781690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.781820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.782004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.782325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 
00:33:17.680 [2024-04-17 10:29:50.782544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.782816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.782943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.783282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.783662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.783848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.783944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.784105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.784116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.784397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.784599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.784629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.784789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 
00:33:17.680 [2024-04-17 10:29:50.785317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.785707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.785968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.786231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.786353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.786363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.786539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.786797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.786808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.786906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.787222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.787514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.787761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 
00:33:17.680 [2024-04-17 10:29:50.787943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.788274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.788672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.788916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.789219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.789364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.789394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.680 [2024-04-17 10:29:50.789553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.789676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.680 [2024-04-17 10:29:50.789687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.680 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.789789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.790033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.790063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.790363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.790584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.790614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 
00:33:17.681 [2024-04-17 10:29:50.790846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.791353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.791657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.791926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.792115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.792448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.792809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.792948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.793194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.793366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.793376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 
00:33:17.681 [2024-04-17 10:29:50.793471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.793634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.793649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.793830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.794243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.794764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.794872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.795046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.795455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.795774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.795951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 
00:33:17.681 [2024-04-17 10:29:50.796052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.796280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.796641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.796856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.796984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.797230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.797380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.797410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.797692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.797850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.797879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.798102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.798338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.798348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 
00:33:17.681 [2024-04-17 10:29:50.798640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.798933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.798962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.799134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.799284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.799320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.799620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.799756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.799766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.799858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.800065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.800075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.800334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.800444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.800454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.681 qpair failed and we were unable to recover it. 00:33:17.681 [2024-04-17 10:29:50.800629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.681 [2024-04-17 10:29:50.800806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.800818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.801010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 
00:33:17.682 [2024-04-17 10:29:50.801321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.801609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.801736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.801994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.802349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.802792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.802966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.803105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.803309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.803339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.803515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.803815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.803825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 
00:33:17.682 [2024-04-17 10:29:50.803997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.804251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.804261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.804459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.804640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.804653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.804888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.804992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.805001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.805165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.805398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.805408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.805696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.805817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.805829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.805940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.806159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.806189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.806396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.806607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.806637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 
00:33:17.682 [2024-04-17 10:29:50.806964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.807129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.807158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.807417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.807601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.807611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.807865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.808068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.808098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.808314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.808466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.808496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.808710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.808996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.809027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.809239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.809436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.809446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.809634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.809814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.809824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 
00:33:17.682 [2024-04-17 10:29:50.810073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.810220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.810249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.810527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.810745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.810755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.810875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.811242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.811597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.811814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.811973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.812202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.682 [2024-04-17 10:29:50.812212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.682 qpair failed and we were unable to recover it. 00:33:17.682 [2024-04-17 10:29:50.812349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.683 [2024-04-17 10:29:50.812488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.683 [2024-04-17 10:29:50.812498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.683 qpair failed and we were unable to recover it. 
00:33:17.683 [2024-04-17 10:29:50.812671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.683 [2024-04-17 10:29:50.812901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.683 [2024-04-17 10:29:50.812912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.683 qpair failed and we were unable to recover it.
00:33:17.683 [2024-04-17 10:29:50.813140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.683 [2024-04-17 10:29:50.813235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.683 [2024-04-17 10:29:50.813246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.683 qpair failed and we were unable to recover it.
[... identical retry output elided: the same four-line sequence (two posix.c:1032:posix_sock_create connect() failed, errno = 111 entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every retry, with timestamps advancing from 10:29:50.813 through 10:29:50.869 ...]
00:33:17.688 [2024-04-17 10:29:50.869651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.688 [2024-04-17 10:29:50.869840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.688 [2024-04-17 10:29:50.869850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.688 qpair failed and we were unable to recover it.
00:33:17.688 [2024-04-17 10:29:50.870053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.870394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.870759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.870943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.871052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.871393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.871769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.871977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.872156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 
00:33:17.688 [2024-04-17 10:29:50.872392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.872618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.872790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.872988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.873338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.873830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.688 [2024-04-17 10:29:50.873967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.688 qpair failed and we were unable to recover it. 00:33:17.688 [2024-04-17 10:29:50.874128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.874490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 
00:33:17.689 [2024-04-17 10:29:50.874822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.874940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.875053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.875412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.875855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.875976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.876247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.876476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.876486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.876590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.876715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.876726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.876827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 
00:33:17.689 [2024-04-17 10:29:50.877229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.877575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.877730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.877927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.878153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.878435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.878729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.878863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.879022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 
00:33:17.689 [2024-04-17 10:29:50.879351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.879777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.879962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.880064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.880294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.880304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.880500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.880670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.880681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.880852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.881212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.881521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.881727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 
00:33:17.689 [2024-04-17 10:29:50.881914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.882085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.882097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.689 qpair failed and we were unable to recover it. 00:33:17.689 [2024-04-17 10:29:50.882278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.689 [2024-04-17 10:29:50.882489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.882500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.882686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.882799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.882810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.882975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.883325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.883668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.883842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.884017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 
00:33:17.690 [2024-04-17 10:29:50.884366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.884639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.884904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.885030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.885280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.885496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.885719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.885961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.886266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 
00:33:17.690 [2024-04-17 10:29:50.886612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.886721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.886878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.887180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.887623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.887749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.887930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.888240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.888526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.888716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 
00:33:17.690 [2024-04-17 10:29:50.888898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.889308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.889665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.889925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.890090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.890312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.890704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.890825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.890989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 
00:33:17.690 [2024-04-17 10:29:50.891308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.891736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.690 [2024-04-17 10:29:50.891839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.690 qpair failed and we were unable to recover it. 00:33:17.690 [2024-04-17 10:29:50.891942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.892408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.892701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.892831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.892996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.893367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 
00:33:17.691 [2024-04-17 10:29:50.893647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.893835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.894011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.894421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.894734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.894870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.894985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.895389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.895747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.895948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 
00:33:17.691 [2024-04-17 10:29:50.896112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.896396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.896818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.896932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.897107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.897343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.897634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.897813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.897906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 
00:33:17.691 [2024-04-17 10:29:50.898294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.898617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.898792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.898907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.899323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.899613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.899799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.899960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.900317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 
00:33:17.691 [2024-04-17 10:29:50.900736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.900946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.901119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.901350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.901360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.691 qpair failed and we were unable to recover it. 00:33:17.691 [2024-04-17 10:29:50.901617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.901723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.691 [2024-04-17 10:29:50.901733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.901905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.902196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.902462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.902774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.902956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 
00:33:17.692 [2024-04-17 10:29:50.903211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.903386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.903397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.903638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.903735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.903746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.903995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.904300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.904547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.904837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.904982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.905177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 
00:33:17.692 [2024-04-17 10:29:50.905452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.905746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.905863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.906039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.906365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.906710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.906826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.906933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.907286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 
00:33:17.692 [2024-04-17 10:29:50.907799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.907914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.908018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.908219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.908663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.908779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.908952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.909235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.909561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.909734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 
00:33:17.692 [2024-04-17 10:29:50.909991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.910368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.910802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.910980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.692 qpair failed and we were unable to recover it. 00:33:17.692 [2024-04-17 10:29:50.911106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.692 [2024-04-17 10:29:50.911207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.911218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.911396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.911563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.911573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.911753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.911864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.911876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.912052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 
00:33:17.693 [2024-04-17 10:29:50.912362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.912655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.912795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.913036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.913368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.913674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.913812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.913993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.914279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 
00:33:17.693 [2024-04-17 10:29:50.914717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.914915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.915109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.915272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.915283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.915518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.915694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.915705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.915884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.916141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.916490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.916753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.916860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 
00:33:17.693 [2024-04-17 10:29:50.917210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.917602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.917817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.917994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.918259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.918493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.918703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.918912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.693 qpair failed and we were unable to recover it. 00:33:17.693 [2024-04-17 10:29:50.919093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.919220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.693 [2024-04-17 10:29:50.919231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 
00:33:17.694 [2024-04-17 10:29:50.919401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.919541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.919552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.919732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.919926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.919936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.920052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.920438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.920666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.920890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.921012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.921247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 
00:33:17.694 [2024-04-17 10:29:50.921500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.921621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.921857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.922231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.922548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.922735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.922851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.923193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.923428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.923632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 
00:33:17.694 [2024-04-17 10:29:50.923886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.924185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.924515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.924867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.924999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.925009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.925139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.925332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.925342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.925447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.925701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.925712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.925903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 
00:33:17.694 [2024-04-17 10:29:50.926115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.926441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.926728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.926881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.927043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.927275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.927285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.927446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.927618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.927629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.927802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.928007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.928018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 00:33:17.694 [2024-04-17 10:29:50.928143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.928320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.928332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.694 qpair failed and we were unable to recover it. 
00:33:17.694 [2024-04-17 10:29:50.928509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.694 [2024-04-17 10:29:50.928693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.928705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.928815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.928938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.928949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.929127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.929302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.929313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.929487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.929663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.929674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.929788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.930241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.930682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.930884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 
00:33:17.695 [2024-04-17 10:29:50.931090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.931387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.931743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.931869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.932035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.932187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.932197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.932364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.932594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.932604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.932769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.933231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 
00:33:17.695 [2024-04-17 10:29:50.933512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.933870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.933987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.934080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.934246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.934257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.934504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.934681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.934691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.934893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.934997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.935007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.935165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.935326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.935337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.935499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.935674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.935685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 
00:33:17.695 [2024-04-17 10:29:50.935922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.936246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.936538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.936729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.936903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.937257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.937606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.937851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.938018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.938141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.938152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 
00:33:17.695 [2024-04-17 10:29:50.938277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.938384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.695 [2024-04-17 10:29:50.938394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.695 qpair failed and we were unable to recover it. 00:33:17.695 [2024-04-17 10:29:50.938587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.938764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.938774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.939022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.939318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.939707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.939844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.940036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.940346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 
00:33:17.696 [2024-04-17 10:29:50.940714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.940857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.941017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.941355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.941712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.941892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.942151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.942370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.942719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.942845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 
00:33:17.696 [2024-04-17 10:29:50.942966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.943408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.943764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.943969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.944150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.944316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.944327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.944521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.944710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.944721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.944829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.945121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 
00:33:17.696 [2024-04-17 10:29:50.945350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.945595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.945786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.945895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.946304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.946585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.946808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.946991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.947169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.947342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.947353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 
00:33:17.696 [2024-04-17 10:29:50.947511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.947692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.947703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.696 qpair failed and we were unable to recover it. 00:33:17.696 [2024-04-17 10:29:50.947889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.948063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.696 [2024-04-17 10:29:50.948074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.948235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.948540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.948824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.948954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.949068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.949298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.949310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.949545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.949721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.949732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 
00:33:17.697 [2024-04-17 10:29:50.949900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.950195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.950654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.950838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.950955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.951204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.951528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.951866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.951983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 
00:33:17.697 [2024-04-17 10:29:50.952147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.952328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.952339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.952569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.952805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.952816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.952984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.953378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.953713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.953901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.954083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.954265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.954275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.954384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.954542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.954552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 
00:33:17.697 [2024-04-17 10:29:50.954795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.955172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.955453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.955807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.955944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.956109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.956499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.956792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.956909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 
00:33:17.697 [2024-04-17 10:29:50.957035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.957227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.957239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.957412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.957522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.697 [2024-04-17 10:29:50.957532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.697 qpair failed and we were unable to recover it. 00:33:17.697 [2024-04-17 10:29:50.957626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.957803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.957814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.957919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.958265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.958500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.958796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.958978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 
00:33:17.698 [2024-04-17 10:29:50.959141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.959423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.959738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.959928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.960036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.960328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.960739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.960855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.960956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 
00:33:17.698 [2024-04-17 10:29:50.961271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.961660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.961789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.961971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.962165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.962178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.962439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.962569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.962580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.962747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.963221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.963620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.963814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 
00:33:17.698 [2024-04-17 10:29:50.964060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.964337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.964676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.964912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.965114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.965293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.965304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.965493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.965753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.965765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.698 qpair failed and we were unable to recover it. 00:33:17.698 [2024-04-17 10:29:50.965949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.698 [2024-04-17 10:29:50.966128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.966143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.966330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.966585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.966597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 
00:33:17.699 [2024-04-17 10:29:50.966790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.966907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.966918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.967034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.967252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.967610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.967787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.967898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.968212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.968574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.968755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 
00:33:17.699 [2024-04-17 10:29:50.968928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.969311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.969676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.969881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.969978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.970230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.970241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.970407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.970508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.970520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.970820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.970994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.971004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.971095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.971255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.971266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 
00:33:17.699 [2024-04-17 10:29:50.971426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.971672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.971683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.971866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.972227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.972683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.972793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.972914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.973224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.973647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.973774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 
00:33:17.699 [2024-04-17 10:29:50.973941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.974230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.974604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.974811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.974996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.975252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.975437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.975448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.975614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.975794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.975806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.699 qpair failed and we were unable to recover it. 00:33:17.699 [2024-04-17 10:29:50.975918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.699 [2024-04-17 10:29:50.976036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.976047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 
00:33:17.700 [2024-04-17 10:29:50.976244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.976425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.976436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.976608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.976865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.976878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.976984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.977270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.977685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.977789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.977988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.978212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 
00:33:17.700 [2024-04-17 10:29:50.978552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.978732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.978964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.979279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.979673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.979779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.979948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.980234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.980617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.980774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 
00:33:17.700 [2024-04-17 10:29:50.980896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.981131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.981485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.981623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.981833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.982251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.982619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.982806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.982921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 
00:33:17.700 [2024-04-17 10:29:50.983283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.983700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.983830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.983977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.984262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.984621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.984805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.985025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.985283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.985295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 00:33:17.700 [2024-04-17 10:29:50.985574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.985822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.700 [2024-04-17 10:29:50.985846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.700 qpair failed and we were unable to recover it. 
00:33:17.700 [2024-04-17 10:29:50.986096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.701 [2024-04-17 10:29:50.986210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.701 [2024-04-17 10:29:50.986222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.701 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.986346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.986465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.986490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.986587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.986760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.986773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.986953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.987283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.987744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.987995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.988104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 
00:33:17.977 [2024-04-17 10:29:50.988399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.988641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.988824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.988932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.989095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.989108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.977 qpair failed and we were unable to recover it. 00:33:17.977 [2024-04-17 10:29:50.989288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.989382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.977 [2024-04-17 10:29:50.989393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.989518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.989790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.989803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.989896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.990328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-04-17 10:29:50.990610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.990734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.990835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.991258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.991626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.991762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.992022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.992386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.992693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.992871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-04-17 10:29:50.992989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.993227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.993449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.993788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.993911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.994107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.994312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.994323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.994586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.994730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.994761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.995030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-04-17 10:29:50.995321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.995718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.995964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.996190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.996305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.996346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.996658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.996923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.996933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.997052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.997490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.997791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.997973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-04-17 10:29:50.998291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.998505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.998535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:50.999634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.999849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:50.999861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:51.000148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:51.000419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:51.000449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:51.000748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:51.000953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:51.000982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-04-17 10:29:51.001146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.978 [2024-04-17 10:29:51.001288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.001316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.001461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.001694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.001725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.001945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.002181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.002211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-04-17 10:29:51.002515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.002757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.002788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.002979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.003193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.003203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.003382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.003673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.003705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.003861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.004153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.004184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.004335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.004616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.004655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.004877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.005050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.005080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.005232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.005501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.005530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-04-17 10:29:51.005834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.006367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.006822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.006994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.007148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.007293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.007304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.007428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.007594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.007624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.007787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.008222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-04-17 10:29:51.008627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.008806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.008948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.009137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.009167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.009323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.009533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.009563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.009779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.010264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.010717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.010880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.011101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.011372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.011401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-04-17 10:29:51.011575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.011838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.011870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.012092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.012250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.012280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.012482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.012620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.012659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.012807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.013012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.013041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-04-17 10:29:51.013263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.013421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.979 [2024-04-17 10:29:51.013450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.013754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.013900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.013930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.014199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.014380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.014410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-04-17 10:29:51.014626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.014781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.014812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.015020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.015230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.015260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.015401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.015710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.015740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.015908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.016300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.016648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.016881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.017078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.017244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.017274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-04-17 10:29:51.017495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.017770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.017800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.017957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.018274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.018535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.018768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.018982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.019075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.019186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.019197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.019322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.019567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.019603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-04-17 10:29:51.019830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.019983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.020014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.020299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.020528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.020538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.020743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.020858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.020888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.021107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.021312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.021342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.021579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.021744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.021775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.021929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.022345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-04-17 10:29:51.022577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.022842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.023168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.023389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.023418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.023627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.023856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.023892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.024084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.024198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.024228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.024455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.024664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.024696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-04-17 10:29:51.024857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.980 [2024-04-17 10:29:51.024996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.025026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.025170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.025480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.025510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 
00:33:17.981 [2024-04-17 10:29:51.025728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.025938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.025949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.026176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.026317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.026347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.026501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.026638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.026678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.026927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.027382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.027696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.027960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.028178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.028392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.028422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 
00:33:17.981 [2024-04-17 10:29:51.028606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.028906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.028916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.029060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.029233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.029263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.029473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.029611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.029641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.029884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.030104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.030134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.030340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.030637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.030677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.031000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.031394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 
00:33:17.981 [2024-04-17 10:29:51.031625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.031902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.032064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.032210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.032240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.032455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.032597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.032627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.032878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.033276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.033740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.033918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.034121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.035043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.035066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 
00:33:17.981 [2024-04-17 10:29:51.035256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.035417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.035428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.035602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.036235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.036255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.036438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.036515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.036525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.036687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.037321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.037340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.037513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.037724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.037756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.981 [2024-04-17 10:29:51.037899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.038167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.981 [2024-04-17 10:29:51.038197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.981 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.038420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.038620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.038659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 
00:33:17.982 [2024-04-17 10:29:51.038942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.039098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.039129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.039450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.039669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.039700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.040652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.040847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.040860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.041037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.041152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.041183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.041486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.041725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.041758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.041965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.042172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.042202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 00:33:17.982 [2024-04-17 10:29:51.042356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.042493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.982 [2024-04-17 10:29:51.042522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.982 qpair failed and we were unable to recover it. 
00:33:17.982 [2024-04-17 10:29:51.042672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.982 [2024-04-17 10:29:51.042834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.982 [2024-04-17 10:29:51.042864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.982 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry timestamped 2024-04-17 10:29:51.043 through 10:29:51.102 ...]
00:33:17.988 [2024-04-17 10:29:51.102705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.102796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.102805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.102981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.103187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.103552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.103787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.103892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.103996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.104210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 
00:33:17.988 [2024-04-17 10:29:51.104606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.104810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.104981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.105092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.105386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.105687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.105925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.105999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.106229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.106423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.106434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 
00:33:17.988 [2024-04-17 10:29:51.106585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.106769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.106779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.106942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.107238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.107637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.107834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.108014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.108176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.108186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.108347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.108626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.108637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.108821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.108993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 
00:33:17.988 [2024-04-17 10:29:51.109244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.109455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.109746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.109856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.110039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.110220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.110231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.110407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.110578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.110589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.988 qpair failed and we were unable to recover it. 00:33:17.988 [2024-04-17 10:29:51.110751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.988 [2024-04-17 10:29:51.110917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.110928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.111184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.111305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.111315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 
00:33:17.989 [2024-04-17 10:29:51.111506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.111762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.111773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.111885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.112198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.112534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.112774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.112956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.113149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.113310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.113321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.113419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.113653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.113664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 
00:33:17.989 [2024-04-17 10:29:51.113858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.114229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.114521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.114713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.114897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.115279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.115486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.115680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.115925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 
00:33:17.989 [2024-04-17 10:29:51.116208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.116573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.116782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.116962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.117206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.117217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.117415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.117577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.117587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.117790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.118224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.118700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.118872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 
00:33:17.989 [2024-04-17 10:29:51.119068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.119174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.119185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.119354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.119615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.119626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.119866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.120321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.120541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.120666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.120833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.121087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.989 [2024-04-17 10:29:51.121098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.989 qpair failed and we were unable to recover it. 00:33:17.989 [2024-04-17 10:29:51.121282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.121532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.121542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 
00:33:17.990 [2024-04-17 10:29:51.121705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.121799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.121810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.121974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.122186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.122627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.122752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.122864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.123253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.123655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.123941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 
00:33:17.990 [2024-04-17 10:29:51.124199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.124319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.124330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.124523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.124714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.124726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.124896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.125179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.125484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.125846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.125968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.126074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 
00:33:17.990 [2024-04-17 10:29:51.126339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.126707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.126896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.127084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.127488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.127815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.127936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.128106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.128384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 
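For anyone triaging this span: errno 111 on Linux is ECONNREFUSED, meaning every connect() toward 10.0.0.2:4420 is refused because no NVMe/TCP target is listening there while the target application is down. The qpair-level message that follows each pair of socket errors is the NVMe/TCP layer reporting that the queue pair could not be re-established on top of the refused socket. The following is a minimal, self-contained sketch in plain C (not SPDK's posix.c sock layer) showing how the same failure presents itself; the address and port come from the log, everything else is illustrative.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Blocking TCP connect to the address/port the initiator keeps retrying. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With nothing listening on the port this prints: errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    if (fd >= 0)
        close(fd);
    return 0;
}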
00:33:17.990 [2024-04-17 10:29:51.128761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.128937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3655859 Killed "${NVMF_APP[@]}" "$@" 00:33:17.990 [2024-04-17 10:29:51.129167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.129443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 10:29:51 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:33:17.990 [2024-04-17 10:29:51.129687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.129927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 10:29:51 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:17.990 [2024-04-17 10:29:51.130099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.130276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.130287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 10:29:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:17.990 qpair failed and we were unable to recover it. 00:33:17.990 [2024-04-17 10:29:51.130392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 10:29:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:17.990 [2024-04-17 10:29:51.130652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.990 [2024-04-17 10:29:51.130664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.990 qpair failed and we were unable to recover it. 
00:33:17.991 [2024-04-17 10:29:51.130838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 10:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:17.991 [2024-04-17 10:29:51.130923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.130934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.131128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.131233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.131245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.131426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.131658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.131670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.131848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.132297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.132681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.132793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.132971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 
00:33:17.991 [2024-04-17 10:29:51.133261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.133554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.133731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.133910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.134141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.134152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.134411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.134659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.134670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.134833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.135194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.135549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 
00:33:17.991 [2024-04-17 10:29:51.135848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.135973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.136079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.136294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.136305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.136488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.136635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.136652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.136828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 10:29:51 -- nvmf/common.sh@469 -- # nvmfpid=3656818 00:33:17.991 [2024-04-17 10:29:51.136952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.136963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.137058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 10:29:51 -- nvmf/common.sh@470 -- # waitforlisten 3656818 00:33:17.991 [2024-04-17 10:29:51.137164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.137175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 10:29:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:17.991 [2024-04-17 10:29:51.137355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.137483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.137495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 
00:33:17.991 10:29:51 -- common/autotest_common.sh@819 -- # '[' -z 3656818 ']' 00:33:17.991 [2024-04-17 10:29:51.137675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.137795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.137807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 10:29:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.991 [2024-04-17 10:29:51.137913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 10:29:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:17.991 [2024-04-17 10:29:51.138157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 [2024-04-17 10:29:51.138168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.991 qpair failed and we were unable to recover it. 00:33:17.991 [2024-04-17 10:29:51.138360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.991 10:29:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.992 [2024-04-17 10:29:51.138543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.138556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.992 qpair failed and we were unable to recover it. 00:33:17.992 10:29:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:17.992 [2024-04-17 10:29:51.138682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.138807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.138822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.992 qpair failed and we were unable to recover it. 00:33:17.992 10:29:51 -- common/autotest_common.sh@10 -- # set +x 00:33:17.992 [2024-04-17 10:29:51.139075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.139284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.139295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.992 qpair failed and we were unable to recover it. 00:33:17.992 [2024-04-17 10:29:51.139476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.139723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.992 [2024-04-17 10:29:51.139736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.992 qpair failed and we were unable to recover it. 
00:33:17.996 [2024-04-17 10:29:51.179609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.179710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.179722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.179904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.180157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.180497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.180721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.180900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.181015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.181094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.181105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.181296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.181406] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:33:17.996 [2024-04-17 10:29:51.181467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.181466] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:17.996 [2024-04-17 10:29:51.181479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.181589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.181694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.181703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.181865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.181979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.181989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.182163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.182339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.182348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.182523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.182663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.182674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.182907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.183008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.183020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
00:33:17.996 [2024-04-17 10:29:51.183145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.183318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:17.996 [2024-04-17 10:29:51.183329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:17.996 qpair failed and we were unable to recover it.
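Interleaved with the connection errors, a second SPDK process starts up: the "Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization..." banner just above and the bracketed DPDK EAL parameter list here. In that list, -c 0xF0 is a core mask selecting CPU cores 4-7, --file-prefix=spdk0 keeps this process's hugepage-backed files separate from other SPDK instances, and --proc-type=auto lets DPDK choose between primary and secondary process roles. As a rough sketch only (SPDK builds this argument vector internally through its env_dpdk layer rather than in application code, and error handling is omitted), the same vector could be handed to DPDK's rte_eal_init() like this:

    /* Sketch: pass the EAL argument list from the log line to DPDK.
     * Requires DPDK headers/libraries; rte_eal_init()/rte_eal_cleanup()
     * are the real DPDK entry points, the argv content is copied verbatim
     * from the log. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                        /* program name as it appears in the log */
            "-c", "0xF0",                  /* core mask: bits 4-7 -> CPU cores 4,5,6,7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--log-level=lib.cryptodev:5",
            "--log-level=user1:6",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",         /* isolates this process's hugepage files */
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        puts("EAL initialized");
        return rte_eal_cleanup();
    }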
00:33:17.996 [2024-04-17 10:29:51.183453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.183624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.183634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.996 qpair failed and we were unable to recover it. 00:33:17.996 [2024-04-17 10:29:51.183804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.996 [2024-04-17 10:29:51.183978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.183989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.184095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.184451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.184867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.184979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.185082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.185243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.185254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.185354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.185567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.185579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 
00:33:17.997 [2024-04-17 10:29:51.185784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.186259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.186583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.186786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.186997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.187360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.187621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.187747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.187912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 
00:33:17.997 [2024-04-17 10:29:51.188216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.188626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.188827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.189007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.189288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.189686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.189928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.190088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.190270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.190280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.190486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.190603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.190614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 
00:33:17.997 [2024-04-17 10:29:51.190817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.191198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.191637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.191766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.191874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.192266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.192560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.192748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.192848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 
00:33:17.997 [2024-04-17 10:29:51.193280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.193624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.997 [2024-04-17 10:29:51.193899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.997 qpair failed and we were unable to recover it. 00:33:17.997 [2024-04-17 10:29:51.194068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.194281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.194589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.194837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.195017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.195265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.195275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.195471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.195708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.195721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 
00:33:17.998 [2024-04-17 10:29:51.195898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.196139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.196496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.196741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.196904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.197320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.197618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.197792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.197997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.198177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.198188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 
00:33:17.998 [2024-04-17 10:29:51.198446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.198713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.198723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.198829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.199194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.199493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.199777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.199884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.199990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.200442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 
00:33:17.998 [2024-04-17 10:29:51.200716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.200913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.201134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.201462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.201822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.201953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.202231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.202452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.202759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.202938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 
00:33:17.998 [2024-04-17 10:29:51.203137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.203344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.203355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.203568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.203830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.203842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.998 qpair failed and we were unable to recover it. 00:33:17.998 [2024-04-17 10:29:51.204048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.204273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.998 [2024-04-17 10:29:51.204284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.204397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.204664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.204676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.204808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.204986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.204997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.205188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.205368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.205379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.205550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.205656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.205667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 
00:33:17.999 [2024-04-17 10:29:51.205901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.206302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.206663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.206840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.207074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.207452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.207759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.207953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.208133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 
00:33:17.999 [2024-04-17 10:29:51.208443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.208746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.208873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.209059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.209315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.209326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.209498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.209737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.209748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.209863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.210237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.210562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.210813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 
00:33:17.999 [2024-04-17 10:29:51.210992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.211301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.211515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.211759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.211901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.212209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.212620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.212814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.212917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 
00:33:17.999 [2024-04-17 10:29:51.213270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.213678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.213862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.214056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.214216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.214225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:17.999 qpair failed and we were unable to recover it. 00:33:17.999 [2024-04-17 10:29:51.214394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.999 [2024-04-17 10:29:51.214570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.214579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.214757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.214991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.215106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.215492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 
00:33:18.000 [2024-04-17 10:29:51.215782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.215963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.216117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.216412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.216424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.216527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.216688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.216700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.216870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.217203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.217439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.217782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.217903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 
00:33:18.000 [2024-04-17 10:29:51.218002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.218119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.218130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.218227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.218330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.218341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.218646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 EAL: No free 2048 kB hugepages reported on node 1
00:33:18.000 [2024-04-17 10:29:51.218813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.218824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.219074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.219407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.219706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.219843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
00:33:18.000 [2024-04-17 10:29:51.220017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.220194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.000 [2024-04-17 10:29:51.220205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.000 qpair failed and we were unable to recover it.
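The single "EAL: No free 2048 kB hugepages reported on node 1" line buried in the block above is a DPDK memory-setup message rather than a socket error: SPDK backs its memory pools with Linux hugepages, and this message reports that NUMA node 1 currently has no free 2 MB pages, which is typically harmless when the process can allocate from node 0 or from larger page sizes. A small stand-alone C sketch (assuming the standard Linux sysfs layout; it crudely probes node0 through node7) that prints the per-node free 2 MB hugepage counts:

    /* Sketch: report free 2048 kB hugepages per NUMA node, which is what the
     * EAL message above refers to. Nodes or page sizes that don't exist on
     * the machine are silently skipped. */
    #include <stdio.h>

    int main(void)
    {
        char path[128];
        for (int node = 0; node < 8; node++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                     node);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;               /* node (or 2 MB hugepage support) absent */
            long free_pages = 0;
            if (fscanf(f, "%ld", &free_pages) == 1)
                printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
            fclose(f);
        }
        return 0;
    }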
00:33:18.000 [2024-04-17 10:29:51.220403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.220583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.220593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.220725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.220874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.220884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.221087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.221347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.221359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.221544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.221654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.221664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.221924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.222118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.222129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.222221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.222398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.222408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.000 qpair failed and we were unable to recover it. 00:33:18.000 [2024-04-17 10:29:51.222602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.000 [2024-04-17 10:29:51.222784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.222796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 
00:33:18.001 [2024-04-17 10:29:51.222969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.223200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.223573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.223749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.223929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.224302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.224630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.224778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.224955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 
00:33:18.001 [2024-04-17 10:29:51.225254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.225516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.225782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.225955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.226324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.226702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.226975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.227082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.227422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 
00:33:18.001 [2024-04-17 10:29:51.227722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.227854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.228078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.228342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.228555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.228772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.228878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.229051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.229325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 
00:33:18.001 [2024-04-17 10:29:51.229686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.229867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.230098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.230317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.230708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.230836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.231082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.231258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.231269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.231436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.231612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.231623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.231874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 
00:33:18.001 [2024-04-17 10:29:51.232249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.001 qpair failed and we were unable to recover it. 00:33:18.001 [2024-04-17 10:29:51.232597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.001 [2024-04-17 10:29:51.232717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.232815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.232994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.233006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.233169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.233275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.233286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.233462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.233650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.233662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.233917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.234160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 
00:33:18.002 [2024-04-17 10:29:51.234612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.234802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.235052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.235262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.235275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.235393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.235513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.235524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.235762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.235999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.236189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.236483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.236842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.236977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 
00:33:18.002 [2024-04-17 10:29:51.237152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.237503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.237808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.237993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.238190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.238354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.238365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.238458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.238617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.238629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.238816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.239259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 
00:33:18.002 [2024-04-17 10:29:51.239543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.239814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.239992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.240261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.240273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.240457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.240639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.240655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.240839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.241147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.241456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.241800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.241889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 
00:33:18.002 [2024-04-17 10:29:51.242022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.242303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.242624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.242869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.243046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.243229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.243240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.002 [2024-04-17 10:29:51.243357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.243468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.002 [2024-04-17 10:29:51.243480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.002 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.243609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.243863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.243874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.243977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 
00:33:18.003 [2024-04-17 10:29:51.244344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.244725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.244910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.245072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.245178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.245187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.245392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.245603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.245613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.245815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.246242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.246624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.246753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 
00:33:18.003 [2024-04-17 10:29:51.246879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.247143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.247371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.247605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.247889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.247993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.248256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.248487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.248497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.248682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.248790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.248799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 
00:33:18.003 [2024-04-17 10:29:51.248971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.249182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.249537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.249824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.249993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.250229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.250575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.250803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.250994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 
00:33:18.003 [2024-04-17 10:29:51.251238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.251477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.251486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.251686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.251866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.251875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.251984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.252264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.252612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.252847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.253021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.253149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.253158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 00:33:18.003 [2024-04-17 10:29:51.253330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.253437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.003 [2024-04-17 10:29:51.253447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.003 qpair failed and we were unable to recover it. 
00:33:18.004 [2024-04-17 10:29:51.253555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.253727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.253737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.253843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.254201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.254432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.254837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.254994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.255178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.255484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 
00:33:18.004 [2024-04-17 10:29:51.255797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.255963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.256131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.256354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.256635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.256757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.256886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.257221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.257475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 
00:33:18.004 [2024-04-17 10:29:51.257810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.257911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.258078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.258403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.258818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.258999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.259164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.259323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.259331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.259576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.259766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.259775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.259937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 
00:33:18.004 [2024-04-17 10:29:51.260353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.260859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.260980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.261166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.261453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.261855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.004 [2024-04-17 10:29:51.261984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.004 qpair failed and we were unable to recover it. 00:33:18.004 [2024-04-17 10:29:51.262157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.262465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 
00:33:18.005 [2024-04-17 10:29:51.262673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.262899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.262992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.263229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.263538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.263864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.263993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.264208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.264559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 
00:33:18.005 [2024-04-17 10:29:51.264863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.264976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.265167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.265167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.005 [2024-04-17 10:29:51.265334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.265344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.265519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.265749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.265760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.265865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.266208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.266458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.266769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.266946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 
00:33:18.005 [2024-04-17 10:29:51.267142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.267418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.267736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.267843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.268018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.268370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.268704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.268872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.269045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.269234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.269244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 
00:33:18.005 [2024-04-17 10:29:51.269409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.269536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.269545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.269809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.270296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.270596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.270778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.005 qpair failed and we were unable to recover it. 00:33:18.005 [2024-04-17 10:29:51.270874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.271128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.005 [2024-04-17 10:29:51.271137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.271322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.271491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.271501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.271743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.271999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.272010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 
00:33:18.006 [2024-04-17 10:29:51.272102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.272370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.272380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.272491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.272601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.272611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.272876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.273181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.273460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.273756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.273873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.274240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 
00:33:18.006 [2024-04-17 10:29:51.274568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.274690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.274789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.275271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.275753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.275940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.276128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.276462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.276816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.276944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 
00:33:18.006 [2024-04-17 10:29:51.277038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.277262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.277726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.277916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.278154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.278346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.278355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.278464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.278625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.278635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.278837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.279206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 
00:33:18.006 [2024-04-17 10:29:51.279488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.279708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.279841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.280071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.280246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.280256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.006 qpair failed and we were unable to recover it. 00:33:18.006 [2024-04-17 10:29:51.280440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.006 [2024-04-17 10:29:51.280604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.280613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.280796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.280995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.281004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.281259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.281360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.281370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.281478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.281642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.281656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 
00:33:18.007 [2024-04-17 10:29:51.281915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.282235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.282475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.282662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.282842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.283203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.283549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.283723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.283959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 
00:33:18.007 [2024-04-17 10:29:51.284349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.284679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.284884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.285144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.285535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.285830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.285959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.286209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.286438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 
00:33:18.007 [2024-04-17 10:29:51.286656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.286828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.287049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.287288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.287534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.287667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.287826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.288268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.288500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 
00:33:18.007 [2024-04-17 10:29:51.288878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.288999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.289178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.289538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.289758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.289935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.290034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.290523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.290870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.290986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 
00:33:18.007 [2024-04-17 10:29:51.291106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.291305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.291315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.007 qpair failed and we were unable to recover it. 00:33:18.007 [2024-04-17 10:29:51.291433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.291541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.007 [2024-04-17 10:29:51.291550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.008 qpair failed and we were unable to recover it. 00:33:18.008 [2024-04-17 10:29:51.291669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.008 [2024-04-17 10:29:51.291769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.008 [2024-04-17 10:29:51.291779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.008 qpair failed and we were unable to recover it. 00:33:18.008 [2024-04-17 10:29:51.291942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.008 [2024-04-17 10:29:51.292109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.008 [2024-04-17 10:29:51.292119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.284 qpair failed and we were unable to recover it. 00:33:18.284 [2024-04-17 10:29:51.292242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.292420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.292431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.284 qpair failed and we were unable to recover it. 00:33:18.284 [2024-04-17 10:29:51.292522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.292706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.292717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.284 qpair failed and we were unable to recover it. 00:33:18.284 [2024-04-17 10:29:51.292882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.292996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.293007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.284 qpair failed and we were unable to recover it. 
00:33:18.284 [2024-04-17 10:29:51.293174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.284 [2024-04-17 10:29:51.293265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.293275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.293450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.293558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.293568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.293748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.293876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.293886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.294046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.294301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.294582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.294786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.294926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 
00:33:18.285 [2024-04-17 10:29:51.295024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.295226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.295419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.295716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.295933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.296120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.296350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.296652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 
00:33:18.285 [2024-04-17 10:29:51.296888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.296990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.297099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.297342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.297623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.297775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.297877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.298260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.298610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.298714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 
00:33:18.285 [2024-04-17 10:29:51.298951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.299361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.299628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.299752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.299920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.300044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.300054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.285 qpair failed and we were unable to recover it. 00:33:18.285 [2024-04-17 10:29:51.300149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.285 [2024-04-17 10:29:51.300314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.300323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.300427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.300593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.300602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.300705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.300892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.300901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 
00:33:18.286 [2024-04-17 10:29:51.301000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.301205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.301399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.301598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.301809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.301928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.302037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.302149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.302157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 00:33:18.286 [2024-04-17 10:29:51.302286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.302467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.286 [2024-04-17 10:29:51.302476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.286 qpair failed and we were unable to recover it. 
00:33:18.286 [2024-04-17 10:29:51.302573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.286 [2024-04-17 10:29:51.302688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.286 [2024-04-17 10:29:51.302698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.286 qpair failed and we were unable to recover it.
00:33:18.286 - 00:33:18.292 [... the same error sequence (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 10:29:51.302 and 10:29:51.349 ...]
00:33:18.292 [2024-04-17 10:29:51.349473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.349581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.349591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.349699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.349890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.349900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.350076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.350302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.350312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.350477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.350585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.350595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.350834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.351187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.351397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 
00:33:18.292 [2024-04-17 10:29:51.351607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.351875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.351999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.352325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.352590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.352875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.352987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.353089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.353198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.353208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 00:33:18.292 [2024-04-17 10:29:51.353300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.353462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.292 [2024-04-17 10:29:51.353472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.292 qpair failed and we were unable to recover it. 
[... the repeated connect() failed (errno = 111) / sock connection error (tqpair=0x7f2080000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence continues for timestamps 10:29:51.353703 through 10:29:51.355703 ...]
00:33:18.292 [2024-04-17 10:29:51.355794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:33:18.292 [2024-04-17 10:29:51.355926] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:18.292 [2024-04-17 10:29:51.355938] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:18.292 [2024-04-17 10:29:51.355948] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:18.292 [2024-04-17 10:29:51.356419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:18.293 [2024-04-17 10:29:51.356510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:18.293 [2024-04-17 10:29:51.356651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:33:18.293 [2024-04-17 10:29:51.356664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
[... interleaved with the notices above, the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating for timestamps 10:29:51.355935 through 10:29:51.357739 ...]
[... the same posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence continues to repeat for timestamps 10:29:51.357850 through 10:29:51.387115 ...]
00:33:18.296 [2024-04-17 10:29:51.387379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.387497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.387506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.387709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.387817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.387826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.388020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.388285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.388698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.388823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.388991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.389190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.296 [2024-04-17 10:29:51.389199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.296 qpair failed and we were unable to recover it. 00:33:18.296 [2024-04-17 10:29:51.389397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.389494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.389503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 
00:33:18.297 [2024-04-17 10:29:51.389628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.389800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.389809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.389932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.390289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.390607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.390729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.390905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.391313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.391517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 
00:33:18.297 [2024-04-17 10:29:51.391805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.391992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.392222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.392449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.392459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.392603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.392706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.392715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.392886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.393217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.393614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.393834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.393943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 
00:33:18.297 [2024-04-17 10:29:51.394108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.394202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.394211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.394371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.394621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.394631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.394741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.394990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.395183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.395578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.395827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.395954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.396068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 
00:33:18.297 [2024-04-17 10:29:51.396397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.396697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.396878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.396970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.397421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.397782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.397957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.398070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.398247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.398256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.297 qpair failed and we were unable to recover it. 00:33:18.297 [2024-04-17 10:29:51.398443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.297 [2024-04-17 10:29:51.398612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.398621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 
00:33:18.298 [2024-04-17 10:29:51.398825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.398936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.398945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.399230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.399451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.399460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.399637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.399829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.399838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.399949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.400055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.400064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.400333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.400495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.400505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.400741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.401152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 
00:33:18.298 [2024-04-17 10:29:51.401569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.401756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.402002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.402246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.402587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.402857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.402959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.403144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.403430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 
00:33:18.298 [2024-04-17 10:29:51.403652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.403768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.403897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.404229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.404589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.404717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.404906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.405328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.405554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.405788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 
00:33:18.298 [2024-04-17 10:29:51.406057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.406283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.406292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.406388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.406652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.406662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.406842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.407317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.407678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.407967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.408204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.408367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.408376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.298 qpair failed and we were unable to recover it. 00:33:18.298 [2024-04-17 10:29:51.408605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.298 [2024-04-17 10:29:51.408861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.408871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 
00:33:18.299 [2024-04-17 10:29:51.409126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.409406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.409725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.409918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.410094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.410470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.410687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.410799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.411054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 
00:33:18.299 [2024-04-17 10:29:51.411340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.411743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.411947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.412048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.412397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.412757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.412855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.413112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.413348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.413357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.413455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.413701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.413711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 
00:33:18.299 [2024-04-17 10:29:51.413942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.414328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.414640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.414891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.415016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.415410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.415695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.415864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.415976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 
00:33:18.299 [2024-04-17 10:29:51.416416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.416690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.416818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.299 qpair failed and we were unable to recover it. 00:33:18.299 [2024-04-17 10:29:51.416935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.417179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.299 [2024-04-17 10:29:51.417188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.417419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.417585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.417594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.417766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.417933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.417941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.418102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.418358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.418367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.418537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.418705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.418715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 
00:33:18.300 [2024-04-17 10:29:51.418928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.419226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.419564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.419752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.419950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.420257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.420556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.420761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.420923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 
00:33:18.300 [2024-04-17 10:29:51.421205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.421564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.421761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.421932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.422369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.422577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.422765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.422877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.423171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 
00:33:18.300 [2024-04-17 10:29:51.423460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.423815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.423993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.424162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.424488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.424850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.424997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.425116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.425360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.425370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.425599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.425778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.425787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 
00:33:18.300 [2024-04-17 10:29:51.425954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.426337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.426621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.426885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.300 qpair failed and we were unable to recover it. 00:33:18.300 [2024-04-17 10:29:51.427048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.300 [2024-04-17 10:29:51.427204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.427213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.427448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.427574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.427584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.427705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.427875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.427884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.428111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.428225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.428234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 
00:33:18.301 [2024-04-17 10:29:51.428413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.428657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.428667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.428909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.429287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.429578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.429868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.429986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.430188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.430277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.430287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.430454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.430692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.430701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 
00:33:18.301 [2024-04-17 10:29:51.430813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.431177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.431467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.431705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.431817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.432121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.432474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.432684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.432888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 
00:33:18.301 [2024-04-17 10:29:51.432993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.433356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.433634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.433814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.433988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.434402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.434671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.434839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.434947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 
00:33:18.301 [2024-04-17 10:29:51.435305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.435530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.435717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.435888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.436047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.436055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.301 [2024-04-17 10:29:51.436173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.436281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.301 [2024-04-17 10:29:51.436290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.301 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.436451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.436575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.436584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.436765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.436946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.436955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.437122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.437288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.437297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 
00:33:18.302 [2024-04-17 10:29:51.437534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.437714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.437723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.437870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.438271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.438609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.438853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.438971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.439212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.439558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.439734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 
00:33:18.302 [2024-04-17 10:29:51.439837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.440204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.440475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.440661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.440856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.441249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.441754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.441854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.442034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 
00:33:18.302 [2024-04-17 10:29:51.442453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.442710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.442888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.443012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.443247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.443678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.443855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.443986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.444108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 
00:33:18.302 [2024-04-17 10:29:51.444531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.444743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.444937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.445038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.445171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.445180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.445295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.445388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.445397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.302 qpair failed and we were unable to recover it. 00:33:18.302 [2024-04-17 10:29:51.445605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.302 [2024-04-17 10:29:51.445766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.445776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.445974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.446315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 
00:33:18.303 [2024-04-17 10:29:51.446546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.446717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.446971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.447355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.447720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.447911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.448142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.448319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.448328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.448510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.448764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.448773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.448955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 
00:33:18.303 [2024-04-17 10:29:51.449381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.449642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.449839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.450016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.450345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.450716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.450839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.451044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.451297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.451307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.451539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.451722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.451732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 
00:33:18.303 [2024-04-17 10:29:51.451852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.452311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.452787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.452975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.453092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.453333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.453609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.453832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 00:33:18.303 [2024-04-17 10:29:51.453949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.454154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.303 [2024-04-17 10:29:51.454163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.303 qpair failed and we were unable to recover it. 
00:33:18.304 [2024-04-17 10:29:51.454336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.454514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.454522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.454779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.454952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.454961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.455123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.455497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.455791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.455989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.456108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.456342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 
00:33:18.304 [2024-04-17 10:29:51.456651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.456821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.456937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.457262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.457533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.457630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.457826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.458115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.458420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 
00:33:18.304 [2024-04-17 10:29:51.458790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.458991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.459219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.459560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.459775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.459966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.460135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.460380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.460866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.460981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 
00:33:18.304 [2024-04-17 10:29:51.461208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.461387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.461396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.461653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.461824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.461833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.462085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.462401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.462715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.462899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.463045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.463250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.463259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 00:33:18.304 [2024-04-17 10:29:51.463447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.463708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.304 [2024-04-17 10:29:51.463718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.304 qpair failed and we were unable to recover it. 
00:33:18.304 [2024-04-17 10:29:51.463858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.304 [2024-04-17 10:29:51.464085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.305 [2024-04-17 10:29:51.464094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.305 qpair failed and we were unable to recover it.
[... the same four-line pattern repeats for every subsequent reconnect attempt logged between 2024-04-17 10:29:51.464274 and 10:29:51.526913 (build-log timestamps 00:33:18.305 through 00:33:18.310): two posix.c:1032:posix_sock_create connect() failures with errno = 111, followed by a nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f2080000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:33:18.310 [2024-04-17 10:29:51.526793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.310 [2024-04-17 10:29:51.526904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.310 [2024-04-17 10:29:51.526913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.310 qpair failed and we were unable to recover it.
00:33:18.310 [2024-04-17 10:29:51.527140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.527310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.527319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.527578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.527811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.527820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.528087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.528469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.528789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.528975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.529217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.529477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.529485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.529659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.529916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.529925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 
00:33:18.310 [2024-04-17 10:29:51.530104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.530280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.530289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.530573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.530827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.530837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.531069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.531315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.531324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.531519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.531718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.531727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.531890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.532177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.532608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.532865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 
00:33:18.310 [2024-04-17 10:29:51.533119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.533384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.533392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.533638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.533841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.533850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.534050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.534303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.534312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.534486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.534771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.534780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.534993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.535195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.535204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.535467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.535722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.535731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.535984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.536180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.536188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 
00:33:18.310 [2024-04-17 10:29:51.536347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.536536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.536544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.310 qpair failed and we were unable to recover it. 00:33:18.310 [2024-04-17 10:29:51.536719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.310 [2024-04-17 10:29:51.536941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.536950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.537070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.537243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.537252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.537438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.537670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.537681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.537895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.538402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.538700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.538992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 
00:33:18.311 [2024-04-17 10:29:51.539178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.539384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.539393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.539555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.539841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.539850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.540087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.540334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.540342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.540540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.540783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.540792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.540964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.541362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.541703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.541965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 
00:33:18.311 [2024-04-17 10:29:51.542156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.542395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.542404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.542610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.542844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.542853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.543110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.543311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.543319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.543575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.543749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.543758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.543934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.544319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.544655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.544868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 
00:33:18.311 [2024-04-17 10:29:51.545121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.545294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.545303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.545513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.545688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.545697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.545871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.546319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.546609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.546874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.547094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.547374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.547383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.547648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.547906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.547915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 
00:33:18.311 [2024-04-17 10:29:51.548079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.548311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.548320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.548569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.548802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.548811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.549010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.549260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.549269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.549429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.549666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.549675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.549905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.550163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.550172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.311 qpair failed and we were unable to recover it. 00:33:18.311 [2024-04-17 10:29:51.550346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.311 [2024-04-17 10:29:51.550471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.550481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.550742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.550860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.550869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 
00:33:18.312 [2024-04-17 10:29:51.551129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.551383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.551391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.551659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.551860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.551869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.552048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.552212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.552221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.552402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.552656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.552665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.552933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.553317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.553731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.553989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 
00:33:18.312 [2024-04-17 10:29:51.554167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.554280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.554288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.554531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.554732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.554741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.554991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.555398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.555843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.555974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.556204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.556455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.556464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.556711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.556894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.556904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 
00:33:18.312 [2024-04-17 10:29:51.557066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.557321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.557330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.557584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.557760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.557769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.558021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.558191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.558200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.558470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.558638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.558650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.558821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.559303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.559663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.559772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 
00:33:18.312 [2024-04-17 10:29:51.560030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.560455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.560848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.560981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.561244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.561448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.561457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.561710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.561828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.561837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.312 qpair failed and we were unable to recover it. 00:33:18.312 [2024-04-17 10:29:51.561996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.562276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.312 [2024-04-17 10:29:51.562285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.562543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.562751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.562760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 
00:33:18.313 [2024-04-17 10:29:51.563015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.563260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.563269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.563442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.563616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.563625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.563870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.564325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.564751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.564997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.565257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.565363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.565372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.565533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.565771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.565780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 
00:33:18.313 [2024-04-17 10:29:51.565962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.566194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.566202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.566456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.566702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.566712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.566944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.567320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.567681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.567856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.568086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.568290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.568298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 00:33:18.313 [2024-04-17 10:29:51.568542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.568769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.313 [2024-04-17 10:29:51.568780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.313 qpair failed and we were unable to recover it. 
00:33:18.313 [2024-04-17 10:29:51.568950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.313 [2024-04-17 10:29:51.569109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.313 [2024-04-17 10:29:51.569117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.313 qpair failed and we were unable to recover it.
00:33:18.592 [... the same four-line error sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back roughly 150 times between 10:29:51.568 and 10:29:51.631 ...]
00:33:18.592 [2024-04-17 10:29:51.631594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.631849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.631858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.632112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.632516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.632808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.632919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.633176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.633406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.633415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.633589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.633724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.633734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.633909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.634140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.634149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 
00:33:18.592 [2024-04-17 10:29:51.634275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.634531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.592 [2024-04-17 10:29:51.634540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.592 qpair failed and we were unable to recover it. 00:33:18.592 [2024-04-17 10:29:51.634750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.634848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.634858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.635109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.635311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.635320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.635574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.635778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.635787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.636018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.636245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.636254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.636541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.636712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.636721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.636913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 
00:33:18.593 [2024-04-17 10:29:51.637304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.637747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.637927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.638109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.638338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.638347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.638586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.638763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.638773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.638936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.639110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.639118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.639374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.639537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.639545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.639792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 
00:33:18.593 [2024-04-17 10:29:51.640252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.640531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.640815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.641044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.641248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.641257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.641420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.641583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.641592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.641846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.642055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.642064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.642279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.642452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.642460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.642721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 
00:33:18.593 [2024-04-17 10:29:51.643284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.643699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.643960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.644190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.644471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.644480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.644660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.644764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.644772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.645032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.645259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.645573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.645739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 
00:33:18.593 [2024-04-17 10:29:51.645904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.646163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.646172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.593 qpair failed and we were unable to recover it. 00:33:18.593 [2024-04-17 10:29:51.646334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.593 [2024-04-17 10:29:51.646492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.646500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.646666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.646927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.646936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.647120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.647417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.647426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.647676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.647959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.647967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.648138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.648367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.648376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.648624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.648785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.648795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 
00:33:18.594 [2024-04-17 10:29:51.648968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.649072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.649081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.649339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.649618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.649628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.649818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.649991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.650000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.650177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.650335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.650343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.650599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.650829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.650838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.651075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.651457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 
00:33:18.594 [2024-04-17 10:29:51.651750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.651923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.652179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.652285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.652294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.652465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.652695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.652705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.652958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.653327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.653754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.653929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.654092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.654292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.654301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 
00:33:18.594 [2024-04-17 10:29:51.654559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.654832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.654842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.655018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.655176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.655185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.594 [2024-04-17 10:29:51.655460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.655722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.594 [2024-04-17 10:29:51.655732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.594 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.655854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.656011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.656020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.656248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.656442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.656451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.656733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.656993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.657002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.657115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.657314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.657323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 
00:33:18.595 [2024-04-17 10:29:51.657578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.657754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.657763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.657965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.658226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.658235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.658332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.658577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.658586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.658752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.658994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.659003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.659123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.659310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.659319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.659478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.659681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.659690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.659920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.660080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.660089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 
00:33:18.595 [2024-04-17 10:29:51.660360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.660612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.660621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.660796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.661217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.661581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.661757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.661938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.662197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.662205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.662468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.662717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.662726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.662919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 
00:33:18.595 [2024-04-17 10:29:51.663346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.663787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.663983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.664239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.664465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.664473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.664753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.664951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.664960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.665216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.665469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.665478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.665746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.665987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.665996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.666255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.666516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.666524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 
00:33:18.595 [2024-04-17 10:29:51.666737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.667318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.667683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.667804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.595 qpair failed and we were unable to recover it. 00:33:18.595 [2024-04-17 10:29:51.667984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.595 [2024-04-17 10:29:51.668261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.668270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.668521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.668772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.668781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.669012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.669299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.669308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.669537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.669708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.669717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 
00:33:18.596 [2024-04-17 10:29:51.669967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.670196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.670205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.670514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.670657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.670666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.670862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.671309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.671619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.671802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.671965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.672221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.672230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.672501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.672736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.672746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 
00:33:18.596 [2024-04-17 10:29:51.672917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.673206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.673726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.673988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.674271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.674531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.674539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.674721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.674924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.674933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.675092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.675342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.675350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.675620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.675866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.675876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 
00:33:18.596 [2024-04-17 10:29:51.676130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.676320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.676329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.676490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.676677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.676686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.676857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.677016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.677024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.677314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.677624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.677633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.677829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.678061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.678070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.678233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.678513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.678522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.678764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 
00:33:18.596 [2024-04-17 10:29:51.679302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.679651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.679957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.680194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.680424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.680435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.596 [2024-04-17 10:29:51.680664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.680838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.596 [2024-04-17 10:29:51.680847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.596 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.681010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.681185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.681194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.681359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.681558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.681566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.681760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 
00:33:18.597 [2024-04-17 10:29:51.682282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.682684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.682956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.683155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.683356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.683365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.683650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.683940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.683949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.684201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.684468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.684477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.684655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.684766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.684775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.685034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.685264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.685273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 
00:33:18.597 [2024-04-17 10:29:51.685545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.685735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.685744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.686001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.686211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.686221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.686421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.686664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.686673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.686793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.687228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.687760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.687923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.688177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.688334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.688343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 
00:33:18.597 [2024-04-17 10:29:51.688600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.688851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.688860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.689027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.689190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.689199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.689457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.689714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.689722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.689898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.690155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.690163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.690449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.690626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.690635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.690919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.691172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.691181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.691344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.691598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.691607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 
00:33:18.597 [2024-04-17 10:29:51.691782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.692220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.692546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.692785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.692964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.693212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.693220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.597 qpair failed and we were unable to recover it. 00:33:18.597 [2024-04-17 10:29:51.693396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.597 [2024-04-17 10:29:51.693647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.693656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.693820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.694266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 
00:33:18.598 [2024-04-17 10:29:51.694735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.694935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.695130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.695411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.695420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.695595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.695825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.695834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.695998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.696499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.696794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.696964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.697191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.697384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.697393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 
00:33:18.598 [2024-04-17 10:29:51.697623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.697801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.697811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.698072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.698491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.698812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.698911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.699075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.699330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.699339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.699600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.699907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.699916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.700093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.700249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.700258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 
00:33:18.598 [2024-04-17 10:29:51.700437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.700697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.700706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.700965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.701252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.701260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.701472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.701657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.701667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.701934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.702377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.702791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.702988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.703186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.703442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.703450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 
00:33:18.598 [2024-04-17 10:29:51.703628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.703786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.703795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.704069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.704333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.704342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.704582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.704817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.704826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.705109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.705231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.705240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.598 qpair failed and we were unable to recover it. 00:33:18.598 [2024-04-17 10:29:51.705440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.705672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.598 [2024-04-17 10:29:51.705680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.705857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.706135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.706143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.706399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.706624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.706633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 
00:33:18.599 [2024-04-17 10:29:51.706899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.707285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.707798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.707993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.708197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.708322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.708331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.708559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.708806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.708816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.709106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.709414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 
00:33:18.599 [2024-04-17 10:29:51.709759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.709944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.710106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.710284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.710293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.710466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.710751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.710761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.711050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.711303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.711312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.711537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.711716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.711725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.711900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.712257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 
00:33:18.599 [2024-04-17 10:29:51.712785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.712955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.713214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.713376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.713384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.713667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.713847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.713856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.714114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.714274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.714282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.714532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.714717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.714726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.714971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.715202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.715211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 00:33:18.599 [2024-04-17 10:29:51.715482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.715669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.715679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.599 qpair failed and we were unable to recover it. 
00:33:18.599 [2024-04-17 10:29:51.715795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.716020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.599 [2024-04-17 10:29:51.716029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.716268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.716463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.716472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.716712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.716952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.716961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.717245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.717354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.717362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.717597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.717766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.717776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.717968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.718227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 
00:33:18.600 [2024-04-17 10:29:51.718593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.718839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.719110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.719369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.719378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.719630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.719882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.719891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.720068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.720252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.720261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.720517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.720776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.720785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.720896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.721145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.721154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.721345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.721592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.721600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 
00:33:18.600 [2024-04-17 10:29:51.721858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.722243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.722670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.722917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.723184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.723394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.723403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.723650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.723899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.723907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.724192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.724307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.724316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.724483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.724661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.724671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 
00:33:18.600 [2024-04-17 10:29:51.724874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.725211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.725704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.725987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.726244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.726499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.726508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.726760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.726923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.726932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.727211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.727476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.727485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 00:33:18.600 [2024-04-17 10:29:51.727731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.727964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.600 [2024-04-17 10:29:51.727974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.600 qpair failed and we were unable to recover it. 
00:33:18.600 [2024-04-17 10:29:51.728177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.600 [2024-04-17 10:29:51.728372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.600 [2024-04-17 10:29:51.728381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.600 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 10:29:51.728 through 10:29:51.795; the repeated entries are omitted here ...]
00:33:18.606 [2024-04-17 10:29:51.795399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.795630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.795639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.795747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.796000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.796008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.796283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.796487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.796496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.796781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.797080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.797090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.797320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.797574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.797583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.797830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.798270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 
00:33:18.606 [2024-04-17 10:29:51.798638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.798820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.799075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.799201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.799210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.799410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.799602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.799611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.799888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.800294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.800814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.800927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.606 qpair failed and we were unable to recover it. 00:33:18.606 [2024-04-17 10:29:51.801040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.606 [2024-04-17 10:29:51.801240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.801249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 
00:33:18.607 [2024-04-17 10:29:51.801532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.801819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.801828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.801989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.802343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.802674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.802936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.803216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.803449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.803457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.803686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.803865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.803874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.804114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 
00:33:18.607 [2024-04-17 10:29:51.804476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.804810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.804934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.805193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.805445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.805454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.805563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.805675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.805684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.805942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.806177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.806186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.806450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.806625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.806635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.806872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.807137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.807146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 
00:33:18.607 [2024-04-17 10:29:51.807379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.807651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.807661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.807922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.808289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.808751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.808874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.809047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.809252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.809260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.809490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.809759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.809768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.810057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.810318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.810327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 
00:33:18.607 [2024-04-17 10:29:51.810558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.810828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.810838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.811070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.811305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.811313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.811437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.811666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.811675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.811865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.812236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.812611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.812775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 00:33:18.607 [2024-04-17 10:29:51.813011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.813174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.607 [2024-04-17 10:29:51.813183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.607 qpair failed and we were unable to recover it. 
00:33:18.607 [2024-04-17 10:29:51.813279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.813429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.813437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.813613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.813868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.813878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.814037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.814379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.814750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.814918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.815034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.815268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.815277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.815521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.815697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.815707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 
00:33:18.608 [2024-04-17 10:29:51.815974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.816209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.816218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.816381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.816637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.816649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.816900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.817417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.817776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.817952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.818132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.818361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.818369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.818604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.818857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.818866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 
00:33:18.608 [2024-04-17 10:29:51.819107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.819472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.819715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.819845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.820028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.820438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.820714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.820857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.821056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 
00:33:18.608 [2024-04-17 10:29:51.821333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.821564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.821731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.608 qpair failed and we were unable to recover it. 00:33:18.608 [2024-04-17 10:29:51.821916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.822167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.608 [2024-04-17 10:29:51.822175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.822455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.822633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.822642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.822907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.823204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.823587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.823709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 
00:33:18.609 [2024-04-17 10:29:51.823981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.824261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.824270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.824554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.824840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.824849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.825042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.825272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.825280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.825475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.825578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.825587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.825819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.826075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.826084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.826351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.826593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.826601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.826864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.827113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.827123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 
00:33:18.609 [2024-04-17 10:29:51.827277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.827528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.827537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.827730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.827997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.828006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.828250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.828375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.828385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.828647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.828818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.828827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.829083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.829333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.829341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.829516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.829687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.829696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.829873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 
00:33:18.609 [2024-04-17 10:29:51.830289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.830606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.830804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.831109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.831270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.831279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.831537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.831792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.831801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.832052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.832233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.832241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.832505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.832700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.832709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.832961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.833221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.833230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 
00:33:18.609 [2024-04-17 10:29:51.833479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.833640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.833654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.833812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.834075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.834084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.834260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.834518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.609 [2024-04-17 10:29:51.834527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.609 qpair failed and we were unable to recover it. 00:33:18.609 [2024-04-17 10:29:51.834727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.834991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.835002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.835266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.835506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.835515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.835745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.836217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 
00:33:18.610 [2024-04-17 10:29:51.836577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.836819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.837095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.837275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.837284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.837447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.837677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.837687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.837925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.838180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.838189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.838391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.838631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.838639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.838822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.839297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 
00:33:18.610 [2024-04-17 10:29:51.839732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.839913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.840074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.840330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.840339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.840621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.840782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.840791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.841048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.841224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.841232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.841476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.841708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.841717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.841949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.842198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.842207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.842315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.842512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.842521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 
00:33:18.610 [2024-04-17 10:29:51.842804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.843062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.843071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.843299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.843547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.843556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.843839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.844182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.844537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.844717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.844975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.845197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.845206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.845479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.845701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.845710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 
00:33:18.610 [2024-04-17 10:29:51.845944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.846326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.846795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.846977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.610 qpair failed and we were unable to recover it. 00:33:18.610 [2024-04-17 10:29:51.847156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.847345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.610 [2024-04-17 10:29:51.847354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.847615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.847874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.847884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.848003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.848262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.848271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.848469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.848727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.848736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 
00:33:18.611 [2024-04-17 10:29:51.848914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.849189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.849198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.849392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.849563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.849572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.849750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.850245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.850681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.850865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.851120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.851322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.851331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.851495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.851750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.851759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 
00:33:18.611 [2024-04-17 10:29:51.851930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.852304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.852561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.852830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.853027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.853325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.853853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.853967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.854082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.854314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.854323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 
00:33:18.611 [2024-04-17 10:29:51.854515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.854767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.854776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.854868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.855233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.855699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.855890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.856169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.856329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.856338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.856594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.856831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.856840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.857105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.857367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.857376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 
00:33:18.611 [2024-04-17 10:29:51.857617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.857883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.857892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.858148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.858328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.858337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.858596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.858854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.858863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.859113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.859370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.859379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.611 qpair failed and we were unable to recover it. 00:33:18.611 [2024-04-17 10:29:51.859637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.611 [2024-04-17 10:29:51.859763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.859773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.860030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.860138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.860147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.860375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.860575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.860584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 
00:33:18.612 [2024-04-17 10:29:51.860754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.861202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.861723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.861927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.862183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.862339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.862348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.862541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.862715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.862724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.862953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.863205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.863214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.863494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.863757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.863766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 
00:33:18.612 [2024-04-17 10:29:51.863997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.864188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.864197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.864470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.864641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.864655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.864845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.865248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.865787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.865891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.866050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.866299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.866308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.866530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.866834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.866843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 
00:33:18.612 [2024-04-17 10:29:51.867005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.867181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.867190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.867469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.867675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.867684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.867852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.868109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.868118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.868383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.868572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.868581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.868840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.869067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.869075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.869345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.869605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.869614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.869859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 
00:33:18.612 [2024-04-17 10:29:51.870306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.870636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.870827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.870986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.871253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.871261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.612 qpair failed and we were unable to recover it. 00:33:18.612 [2024-04-17 10:29:51.871522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.871722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.612 [2024-04-17 10:29:51.871731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.871960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.872274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.872739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.872997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 
00:33:18.613 [2024-04-17 10:29:51.873178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.873438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.873447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.873627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.873888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.873898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.874016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.874192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.874201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.874459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.874709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.874718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.874893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.875148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.875157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.875352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.875513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.875522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.875777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 
00:33:18.613 [2024-04-17 10:29:51.876245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.876557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.876737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.876922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.877153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.877162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.877419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.877696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.877705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.877885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.878142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.878151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.878379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.878581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.878590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.878760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 
00:33:18.613 [2024-04-17 10:29:51.879244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.879625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.879797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.879974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.880343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.880761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.880938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.881121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.613 [2024-04-17 10:29:51.881474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 
00:33:18.613 [2024-04-17 10:29:51.881808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.613 [2024-04-17 10:29:51.881993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.613 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.882225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.882473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.882482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.882749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.882951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.882960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.883137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.883326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.883335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.883524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.883734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.883745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.883988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.884248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.884257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.884434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.884600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.884611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 
00:33:18.614 [2024-04-17 10:29:51.884867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.885383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.885681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.885855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.886084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.886335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.886343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.886517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.886675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.886684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.886867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.887116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.887125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.887376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.887638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.887651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 
00:33:18.614 [2024-04-17 10:29:51.887880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.888393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.888729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.888846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.889025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.889191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.889199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.889432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.889680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.889689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.889867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.890120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.890129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.890358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.890595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.890604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 
00:33:18.614 [2024-04-17 10:29:51.890785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.891314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.891772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.891944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.892181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.892401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.892410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.892669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.892963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.892972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.893149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.893308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.893317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 00:33:18.614 [2024-04-17 10:29:51.893510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.893606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.614 [2024-04-17 10:29:51.893615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.614 qpair failed and we were unable to recover it. 
[2024-04-17 10:29:51.893 through 10:29:51.948, console timestamps 00:33:18.614 to 00:33:18.894: the same failure sequence repeats for approximately 147 further qpairs, differing only in timestamps: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:33:18.894 [2024-04-17 10:29:51.948610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.948770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.948779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.948938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.949263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.949635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.949766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.949939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.950234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.950517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.950706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 
00:33:18.894 [2024-04-17 10:29:51.950901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.951264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.951675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.951855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.951979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.952169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.952178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.894 qpair failed and we were unable to recover it. 00:33:18.894 [2024-04-17 10:29:51.952351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.952509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.894 [2024-04-17 10:29:51.952518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.952679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.952856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.952865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.953038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.953294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.953302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 
00:33:18.895 [2024-04-17 10:29:51.953534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.953704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.953713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.953830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.954208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.954651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.954781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.954944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.955278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.955576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.955682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 
00:33:18.895 [2024-04-17 10:29:51.955840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.956195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.956494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.956801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.956909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.956999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.957303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.957577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.957765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 
00:33:18.895 [2024-04-17 10:29:51.958022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.958277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.958286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.958448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.958622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.958631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.958865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.959118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.959127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.959314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.959543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.959552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.959784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.960256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.960627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.960768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 
00:33:18.895 [2024-04-17 10:29:51.961015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.961193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.961202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.961400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.961648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.961658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.961843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.962345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.895 qpair failed and we were unable to recover it. 00:33:18.895 [2024-04-17 10:29:51.962563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.895 [2024-04-17 10:29:51.962691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.962812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.962900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.962909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.963114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 
00:33:18.896 [2024-04-17 10:29:51.963419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.963723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.963910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.964030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.964271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.964774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.964945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.965142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.965481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 
00:33:18.896 [2024-04-17 10:29:51.965715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.965850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.966016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.966270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.966278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.966475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.966709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.966718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.966890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.967274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.967742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.967928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.968165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.968398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.968407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 
00:33:18.896 [2024-04-17 10:29:51.968517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.968631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.968640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.968754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.969339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.969741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.969924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.970099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.970477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.970831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.970940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 
00:33:18.896 [2024-04-17 10:29:51.971129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.971458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.971810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.971996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.972160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.972330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.972338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.972498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.972776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.972785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.896 qpair failed and we were unable to recover it. 00:33:18.896 [2024-04-17 10:29:51.972892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.896 [2024-04-17 10:29:51.973120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.973130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.973221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.973381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.973391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 
00:33:18.897 [2024-04-17 10:29:51.973628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.973806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.973816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.974071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.974401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.974814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.974912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.975091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.975458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.975751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.975931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 
00:33:18.897 [2024-04-17 10:29:51.976122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.976369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.976727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.976916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.977085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.977366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.977599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.977801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.977900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 
00:33:18.897 [2024-04-17 10:29:51.978190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.978572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.978809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.979065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.979477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.979852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.979978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.980220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.980448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.980457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.980620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.980740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.980749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 
00:33:18.897 [2024-04-17 10:29:51.981008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.981386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.981678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.981812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.982041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.982487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.897 qpair failed and we were unable to recover it. 00:33:18.897 [2024-04-17 10:29:51.982831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.897 [2024-04-17 10:29:51.982941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.898 qpair failed and we were unable to recover it. 00:33:18.898 [2024-04-17 10:29:51.983133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.898 [2024-04-17 10:29:51.983294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.898 [2024-04-17 10:29:51.983303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.898 qpair failed and we were unable to recover it. 
00:33:18.898 [2024-04-17 10:29:51.983489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.898 [2024-04-17 10:29:51.983601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.898 [2024-04-17 10:29:51.983609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.898 qpair failed and we were unable to recover it.
[… the same three-message failure sequence repeats continuously, with only the microsecond timestamps advancing, from 10:29:51.983 through 10:29:52.037: connect() failed, errno = 111 (posix_sock_create), sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 (nvme_tcp_qpair_connect_sock), followed by "qpair failed and we were unable to recover it." …]
00:33:18.903 [2024-04-17 10:29:52.037403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.903 [2024-04-17 10:29:52.037638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.903 [2024-04-17 10:29:52.037651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.903 qpair failed and we were unable to recover it.
00:33:18.903 [2024-04-17 10:29:52.037879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.037996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.038170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.038441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.038854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.038966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.039142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.039379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.039732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.039924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 
00:33:18.903 [2024-04-17 10:29:52.040028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.040406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.040751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.040948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.903 qpair failed and we were unable to recover it. 00:33:18.903 [2024-04-17 10:29:52.041075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.903 [2024-04-17 10:29:52.041195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.041205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.041381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.041540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.041549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.041716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.041811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.041821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.041987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 
00:33:18.904 [2024-04-17 10:29:52.042285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.042514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.042699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.042890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.043227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.043540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.043667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.043763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.044276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 
00:33:18.904 [2024-04-17 10:29:52.044633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.044930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.045037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.045444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.045743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.045962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.046191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.046427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.046437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.046619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.046795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.046806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.046979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 
00:33:18.904 [2024-04-17 10:29:52.047349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.047748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.047919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.048029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.048425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.048796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.048969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.049076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.049234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.904 [2024-04-17 10:29:52.049243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.904 qpair failed and we were unable to recover it. 00:33:18.904 [2024-04-17 10:29:52.049435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.049611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.049620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 
00:33:18.905 [2024-04-17 10:29:52.049741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.049976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.049986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.050098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.050200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.050209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.050395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.050684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.050694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.050856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.051281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.051714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.051898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.052139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 
00:33:18.905 [2024-04-17 10:29:52.052450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.052734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.052850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.052969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.053351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.053664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.053831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.053937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.054492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 
00:33:18.905 [2024-04-17 10:29:52.054708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.054823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.055049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.055246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.055255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.055419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.055580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.055589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.055760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.056133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.056427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.056839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.056938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 
00:33:18.905 [2024-04-17 10:29:52.057172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.057330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.057340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.057451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.057652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.057664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.057855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.058183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.058559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.058781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.058951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 00:33:18.905 [2024-04-17 10:29:52.059071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.059188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.905 [2024-04-17 10:29:52.059198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.905 qpair failed and we were unable to recover it. 
00:33:18.906 [2024-04-17 10:29:52.059384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.059494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.059503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.059617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.059776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.059786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.059894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.060238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.060677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.060847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.061023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.061187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.061196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.061424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.061653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.061663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 
00:33:18.906 [2024-04-17 10:29:52.061892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.062316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.062620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.062821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.062985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.063260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.063638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.063931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.064152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 
00:33:18.906 [2024-04-17 10:29:52.064432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.064667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.064782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.064950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.065225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.065541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.065763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.065922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.066277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 
00:33:18.906 [2024-04-17 10:29:52.066553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.066661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.066928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.067329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.067775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.067945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.068128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.068312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.068321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.068569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.068797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.068807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 00:33:18.906 [2024-04-17 10:29:52.068978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.069149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.069159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.906 qpair failed and we were unable to recover it. 
00:33:18.906 [2024-04-17 10:29:52.069399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.906 [2024-04-17 10:29:52.069504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.069513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.069681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.069860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.069870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.070077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.070343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.070569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.070676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.070960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.071373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 
00:33:18.907 [2024-04-17 10:29:52.071781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.071991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.072195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.072398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.072674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.072851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.073112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.073290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.073300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.073537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.073764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.073774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.073981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 
00:33:18.907 [2024-04-17 10:29:52.074384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.074744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.074960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.075144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.075351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.075360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.075594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.075804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.075814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.075973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.076231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.076240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.076522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.076632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.076645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.076899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 
00:33:18.907 [2024-04-17 10:29:52.077335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.077628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.077746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.077999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.078348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.078820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.078995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.079201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.079478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 
00:33:18.907 [2024-04-17 10:29:52.079809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.907 [2024-04-17 10:29:52.079921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.907 qpair failed and we were unable to recover it. 00:33:18.907 [2024-04-17 10:29:52.080090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.080369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.080661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.080777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.080937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.081141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.081524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 
00:33:18.908 [2024-04-17 10:29:52.081817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.081993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.082232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.082417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.082426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.082657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.082860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.082870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.083137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.083385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.083394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.083567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.083768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.083778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.084037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.084127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.084137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.084367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.084651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.084660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 
00:33:18.908 [2024-04-17 10:29:52.084824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.085281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.085506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.085768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.085868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.086216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.086585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.086793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.086974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 
00:33:18.908 [2024-04-17 10:29:52.087371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.087634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.087828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.088005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.088174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.088184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.088435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.088635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.908 [2024-04-17 10:29:52.088654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.908 qpair failed and we were unable to recover it. 00:33:18.908 [2024-04-17 10:29:52.088843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.089252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.089649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.089754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 
00:33:18.909 [2024-04-17 10:29:52.089917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.090244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.090525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.090781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.090957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.091354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.091633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.091766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.091940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 
00:33:18.909 [2024-04-17 10:29:52.092246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.092512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.092798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.092973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.093247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.093256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.093499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.093758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.093768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.093888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.093995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.094190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.094556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 
00:33:18.909 [2024-04-17 10:29:52.094780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.094895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.095103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.095459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.095724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.095892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.096071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.096431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.096790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.096984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 
00:33:18.909 [2024-04-17 10:29:52.097108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.097209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.097217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.097466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.097696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.097706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.097824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.098133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.909 [2024-04-17 10:29:52.098553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.909 [2024-04-17 10:29:52.098807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.909 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.098967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.099095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.099104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.099383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.099628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.099637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 
00:33:18.910 [2024-04-17 10:29:52.099839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.100277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.100703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.100896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.101159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.101375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.101384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.101682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.101856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.101865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.102150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.102404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.102413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.102646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.102737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.102746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 
00:33:18.910 [2024-04-17 10:29:52.103007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.103166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.103175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.103402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.103602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.103612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.103854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.104133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.104141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.104397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.104657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.104666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.104910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.105333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.105751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.105938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 
00:33:18.910 [2024-04-17 10:29:52.106212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.106320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.106329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.106509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.106787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.106796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.107060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.107175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.107184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.107362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.107605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.107614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.107776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.108240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.108651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.108845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 
00:33:18.910 [2024-04-17 10:29:52.109041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.109276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.109284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.109542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.109796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.109806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.110087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.110359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.110367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.110604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.110854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.110863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.111139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.111378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.910 [2024-04-17 10:29:52.111386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.910 qpair failed and we were unable to recover it. 00:33:18.910 [2024-04-17 10:29:52.111548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.111705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.111714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.111974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 
00:33:18.911 [2024-04-17 10:29:52.112349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.112762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.112934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.113096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.113385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.113394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.113568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.113745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.113755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.113865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.114121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.114130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.114292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.114532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.114541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.114777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 
00:33:18.911 [2024-04-17 10:29:52.115218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.115624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.115756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.115877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.116132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.116141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.116330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.116559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.116571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.116821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.116994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.117003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.117262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.117442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.117451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.117654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.117907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.117917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 
00:33:18.911 [2024-04-17 10:29:52.118174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.118421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.118430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.118716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.118891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.118900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.119185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.119412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.119420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.119724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.119985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.119994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.120156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.120310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.120319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.120499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.120657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.120666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.120829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 
00:33:18.911 [2024-04-17 10:29:52.121350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.121722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.121892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.122050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.122298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.122307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.122494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.122769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.122779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.123034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.123191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.123200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.911 [2024-04-17 10:29:52.123449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.123711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.911 [2024-04-17 10:29:52.123720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.911 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.123958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.124219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.124228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 
00:33:18.912 [2024-04-17 10:29:52.124491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.124756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.124765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.124943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.125200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.125209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.125417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.125662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.125671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.125904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.126300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.126765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.126974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.127079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 
00:33:18.912 [2024-04-17 10:29:52.127375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.127759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.127961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.128207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.128435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.128443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.128661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.128850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.128860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.128950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.129227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.129746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.129875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 
00:33:18.912 [2024-04-17 10:29:52.130049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.130177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.130186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
00:33:18.912 [2024-04-17 10:29:52.130363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.130557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.130565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
00:33:18.912 10:29:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:33:18.912 [2024-04-17 10:29:52.130827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.131021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.131031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
00:33:18.912 10:29:52 -- common/autotest_common.sh@852 -- # return 0
00:33:18.912 [2024-04-17 10:29:52.131299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.131404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.131413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
00:33:18.912 10:29:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:33:18.912 [2024-04-17 10:29:52.131651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 10:29:52 -- common/autotest_common.sh@718 -- # xtrace_disable
00:33:18.912 [2024-04-17 10:29:52.131873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.131883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
00:33:18.912 10:29:52 -- common/autotest_common.sh@10 -- # set +x
00:33:18.912 [2024-04-17 10:29:52.132058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.132217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.912 [2024-04-17 10:29:52.132225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.912 qpair failed and we were unable to recover it.
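Interleaved with the connect() spam, the xtrace lines above show the target-side script leaving its wait loop ((( i == 0 )) followed by return 0) and closing the start_nvmf_tgt timing section, i.e. the script now treats the NVMe-oF target application as started even while the initiator's queue pairs are still being refused. A purely illustrative bash sketch of a bounded wait-for-listener loop in that spirit; the function name, retry count, and sleep interval are invented for the example and are not the autotest helpers:

    # Hypothetical poll-until-listening helper (illustrative only, not from
    # autotest_common.sh): retry a plain TCP connect() until it is accepted
    # or a bounded number of attempts is exhausted.
    wait_for_tcp_listener() {
        local ip=$1 port=$2 i
        for ((i = 0; i < 30; i++)); do
            (exec 3<>/dev/tcp/"$ip"/"$port") 2>/dev/null && return 0
            sleep 1
        done
        return 1   # still refused after 30 attempts
    }

    wait_for_tcp_listener 10.0.0.2 4420 || echo "no listener on 10.0.0.2:4420 after 30s"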
00:33:18.912 [2024-04-17 10:29:52.132481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.132660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.132669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.132849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.133305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.133647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.133885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.134046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.134302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.134311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.134420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.134588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.134597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.912 qpair failed and we were unable to recover it. 00:33:18.912 [2024-04-17 10:29:52.134831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.912 [2024-04-17 10:29:52.135003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.135012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 
00:33:18.913 [2024-04-17 10:29:52.135274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.135471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.135480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.135787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.135893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.135901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.136139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.136317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.136327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.136577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.136831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.136840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.137014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.137223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.137232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.137462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.137685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.137694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.137948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 
00:33:18.913 [2024-04-17 10:29:52.138368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.138658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.138830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.138995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.139263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.139273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.139490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.139682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.139692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.139953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.140124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.140133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.140377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.140652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.140662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.140918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 
00:33:18.913 [2024-04-17 10:29:52.141391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.141725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.141898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.142086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.142419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.142428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.142614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.142859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.142869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.143124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.143365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.143375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.143571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.143828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.143838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.144018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.144311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.144320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 
00:33:18.913 [2024-04-17 10:29:52.144523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.144696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.144705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.144964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.145140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.145149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.913 qpair failed and we were unable to recover it. 00:33:18.913 [2024-04-17 10:29:52.145482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.913 [2024-04-17 10:29:52.145671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.145681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.145870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.145991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.146001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.146160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.146445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.146454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.146635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.146867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.146876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.147105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.147343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.147352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 
00:33:18.914 [2024-04-17 10:29:52.147535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.147702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.147712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.147871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.148255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.148829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.148964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.149261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.149523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.149531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.149714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.149864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.149874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.150034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 
00:33:18.914 [2024-04-17 10:29:52.150332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.150654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.150838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.150954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.151200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.151657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.151813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.151978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.152155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.152164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.152349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.152635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.152647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 
00:33:18.914 [2024-04-17 10:29:52.152858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.153219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.153706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.153851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.154056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.154296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.154307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.154545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.154824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.154833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.155042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.155414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 
00:33:18.914 [2024-04-17 10:29:52.155723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.155920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.156035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.156299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.156308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.914 [2024-04-17 10:29:52.156564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.156697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.914 [2024-04-17 10:29:52.156707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.914 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.156909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.157067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.157076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.157344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.157577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.157586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.157766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.158169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 
00:33:18.915 [2024-04-17 10:29:52.158538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.158760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.158975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.159099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.159277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.159286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.159460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.159605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.159614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.159855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.160264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.160693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.160833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 
00:33:18.915 [2024-04-17 10:29:52.160923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.161418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.161770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.161888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.162007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.162322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.162667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.162836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.162944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 
00:33:18.915 [2024-04-17 10:29:52.163218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.163592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.163782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.163989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.164281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.164574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.164819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.165000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.165257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.165265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 00:33:18.915 [2024-04-17 10:29:52.165448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.165653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.915 [2024-04-17 10:29:52.165663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.915 qpair failed and we were unable to recover it. 
00:33:18.915 [2024-04-17 10:29:52.165875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.915 qpair failed and we were unable to recover it.
00:33:18.915 [2024-04-17 10:29:52.166348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.915 qpair failed and we were unable to recover it.
00:33:18.915 10:29:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:18.915 [2024-04-17 10:29:52.166854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.915 [2024-04-17 10:29:52.166983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.915 qpair failed and we were unable to recover it.
00:33:18.915 10:29:52 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:18.916 [2024-04-17 10:29:52.167238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.167425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.167435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.916 qpair failed and we were unable to recover it.
00:33:18.916 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:18.916 [2024-04-17 10:29:52.167611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.167788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.167798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.916 qpair failed and we were unable to recover it.
00:33:18.916 10:29:52 -- common/autotest_common.sh@10 -- # set +x
00:33:18.916 [2024-04-17 10:29:52.167962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.168137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:18.916 [2024-04-17 10:29:52.168146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420
00:33:18.916 qpair failed and we were unable to recover it.
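Two script steps surface in the trace above: nvmf/common.sh registers a cleanup trap (dump the app's shared-memory state, then nvmftestfini) for SIGINT/SIGTERM/EXIT, and host/target_disconnect.sh issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the running SPDK target to create a RAM-backed bdev named Malloc0, 64 MiB in size with a 512-byte block size. rpc_cmd here appears to wrap SPDK's scripts/rpc.py; a standalone sketch of the equivalent direct call, assuming the default RPC socket path:

    # Same RPC as the traced rpc_cmd call, sent straight to a running SPDK
    # target over its Unix-domain RPC socket (/var/tmp/spdk.sock is the
    # usual default and an assumption here): create a 64 MiB malloc bdev
    # with 512-byte blocks named Malloc0.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0

Typically the test then attaches this bdev to an NVMe-oF subsystem with a TCP listener on port 4420; until that listener is up, the connect() refusals above are the expected behavior.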
00:33:18.916 [2024-04-17 10:29:52.168327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.168619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.168628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.168865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.169264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.169686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.169826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.169951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.170212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.170220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.170412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.170675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.170684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.170941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 
00:33:18.916 [2024-04-17 10:29:52.171302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.171839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.171953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.172214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.172518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.172527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.172822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.173274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.173786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.173968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.174089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.174260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.174269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 
00:33:18.916 [2024-04-17 10:29:52.174526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.174691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.174700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.174882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.175120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.175649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.175856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.176087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.176348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.176359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.176574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.176788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.176798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.177006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.177239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.177248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 
00:33:18.916 [2024-04-17 10:29:52.177523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.177820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.177830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.178013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.178196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.178205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.178438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.178720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.178730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.178960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.179069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.179078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.916 qpair failed and we were unable to recover it. 00:33:18.916 [2024-04-17 10:29:52.179329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.916 [2024-04-17 10:29:52.179443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.179452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.179703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.179887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.179897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.180059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.180314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.180324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 
00:33:18.917 [2024-04-17 10:29:52.180594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.180810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.180821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.181010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.181320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.181843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.181981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.182149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.182363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.182373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.182543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.182830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.182839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.183099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.183347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.183357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 
00:33:18.917 [2024-04-17 10:29:52.183617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.183808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.183818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.183935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.184299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.184752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.184992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.185164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.185333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.185341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.185570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.185748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.185757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.185883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 
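The repeated "connect() failed, errno = 111" records above come from the host-side NVMe/TCP initiator retrying connect() against 10.0.0.2:4420 while nothing is listening there yet; on Linux, errno 111 is ECONNREFUSED. A minimal shell sketch of the same failure mode (illustrative only; the loopback address and the assumption that nothing listens on that port are not taken from the log):

    # With no listener bound to the port, the kernel refuses the TCP handshake and
    # connect() fails with errno 111 (ECONNREFUSED); bash's /dev/tcp makes that visible.
    bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' \
      || echo "connect refused -> errno 111 (ECONNREFUSED)"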
00:33:18.917 [2024-04-17 10:29:52.186226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.186589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.186837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 Malloc0 00:33:18.917 [2024-04-17 10:29:52.187119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.187299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.187308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.187488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.187675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.187684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.917 [2024-04-17 10:29:52.187941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.188101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.188109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 10:29:52 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:18.917 [2024-04-17 10:29:52.188365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.917 [2024-04-17 10:29:52.188620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.188630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:18.917 [2024-04-17 10:29:52.188831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.189010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.189019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 
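Interleaved with the connection errors, the shell trace shows the target being configured over SPDK's RPC interface, starting with "nvmf_create_transport -t tcp -o"; the "*** TCP Transport Init ***" notice a few records later confirms the transport came up. In the autotest helpers, rpc_cmd effectively forwards to scripts/rpc.py, so a standalone sketch of the same step (assuming a running SPDK target app and the default RPC socket; the -o flag is copied from the log and its exact meaning depends on the SPDK version) looks like:

    # Create the NVMe/TCP transport on an already-running SPDK target (sketch, not the test script).
    scripts/rpc.py nvmf_create_transport -t tcp -o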
00:33:18.917 [2024-04-17 10:29:52.189280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.189594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.189603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.917 qpair failed and we were unable to recover it. 00:33:18.917 [2024-04-17 10:29:52.189856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.917 [2024-04-17 10:29:52.190060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.190068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.190326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.190522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.190532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.190713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.190853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.190862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.191120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.191296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.191305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.191540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.191717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.191727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.191910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 
00:33:18.918 [2024-04-17 10:29:52.192257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.192688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.192807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.193039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.193272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.193280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.193453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.193707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.193717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.193950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.194184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.194602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.194799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 
00:33:18.918 [2024-04-17 10:29:52.194834] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.918 [2024-04-17 10:29:52.195033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.195308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.195734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.195997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.196175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.196379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.196387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.196646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.196844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.196853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.197098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.197319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.197329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.197574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.197734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.197743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 
00:33:18.918 [2024-04-17 10:29:52.197983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.198218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.198227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.198442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.198701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.198712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.198943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.199213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.199222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.199404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.199661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.199670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.199852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.200107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.200116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.200369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.200545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.200554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 00:33:18.918 [2024-04-17 10:29:52.200801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.201060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.918 [2024-04-17 10:29:52.201069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.918 qpair failed and we were unable to recover it. 
00:33:18.919 [2024-04-17 10:29:52.201268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.201514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.201523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.201800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.202054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.202063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.202266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.202540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.202549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.202758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.203012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.203021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.203200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.203453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.203464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:18.919 [2024-04-17 10:29:52.203700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 10:29:52 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.919 [2024-04-17 10:29:52.203962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.203972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 
00:33:18.919 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:18.919 [2024-04-17 10:29:52.204211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.204452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:18.919 [2024-04-17 10:29:52.204462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.204666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.204854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.204864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.205122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.205362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.205370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.205640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.205830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.919 [2024-04-17 10:29:52.205840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:18.919 qpair failed and we were unable to recover it. 00:33:18.919 [2024-04-17 10:29:52.206018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.206270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.206279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.206536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.206723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.206733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.206987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.207233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.207242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 
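The next RPC in the trace, "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001", creates the subsystem the host keeps trying to reach, with -a allowing any host NQN and -s setting the serial number. A standalone sketch of the same call (values taken from the log, rpc.py invocation assumed):

    # Create the target subsystem; -a = allow any host, -s = serial number.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001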
00:33:19.179 [2024-04-17 10:29:52.207504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.207711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.207720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.207908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.208269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.208702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.179 [2024-04-17 10:29:52.208877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.179 qpair failed and we were unable to recover it. 00:33:19.179 [2024-04-17 10:29:52.209155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.209336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.209356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.209602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.209760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.209769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.210025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.210252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.210261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 
00:33:19.180 [2024-04-17 10:29:52.210525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.210804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.210813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.211092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.211373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.211382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.180 [2024-04-17 10:29:52.211651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 10:29:52 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:19.180 [2024-04-17 10:29:52.211862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.211872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.212052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.180 [2024-04-17 10:29:52.212260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.212270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.180 [2024-04-17 10:29:52.212477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.212586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.212595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.212767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.213218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 
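"nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0" then attaches the Malloc0 bdev (echoed as "Malloc0" a little earlier in the log) to the subsystem as a namespace. A standalone sketch; the bdev_malloc_create sizes below are placeholders, not taken from the log:

    # Back the subsystem with a malloc bdev and expose it as a namespace (sketch only;
    # 64 MiB / 512-byte blocks are illustrative values, not from this run).
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0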
00:33:19.180 [2024-04-17 10:29:52.213621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.213856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.214117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.214241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.214250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.214442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.214697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.214706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.214965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.215189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.215617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.215787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.216006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.216275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.216283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 
00:33:19.180 [2024-04-17 10:29:52.216448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.216704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.216713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.216920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.217401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.217703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.217982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.218171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.218398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.218407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.218597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.218836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.218845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.219102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.219274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.219283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 
00:33:19.180 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.180 [2024-04-17 10:29:52.219545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.219749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.219758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 10:29:52 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.180 [2024-04-17 10:29:52.220014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.220171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.220181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.180 [2024-04-17 10:29:52.220358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.180 [2024-04-17 10:29:52.220601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.180 [2024-04-17 10:29:52.220611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.180 qpair failed and we were unable to recover it. 00:33:19.180 [2024-04-17 10:29:52.220853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.221034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.221044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.221299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.221571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.221580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.221821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.181 qpair failed and we were unable to recover it. 
00:33:19.181 [2024-04-17 10:29:52.222347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.222787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.181 [2024-04-17 10:29:52.222888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2080000b90 with addr=10.0.0.2, port=4420 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.223107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.181 [2024-04-17 10:29:52.225455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.225544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.225564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.225571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.225577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.225595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.181 10:29:52 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:19.181 10:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.181 10:29:52 -- common/autotest_common.sh@10 -- # set +x 00:33:19.181 [2024-04-17 10:29:52.235473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.235562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.235578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.235585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 10:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.181 [2024-04-17 10:29:52.235590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.235608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 
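Two listener RPCs follow: "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420" and the same for the discovery subsystem. Once nvmf_tcp_listen reports "Listening on 10.0.0.2 port 4420", the host-side failures change character: the TCP connect() now succeeds, but the Fabrics CONNECT for the I/O queue is rejected by the target ("Unknown controller ID 0x1", completed with sct 1, sc 130), so the qpair still cannot be recovered. A standalone sketch of the listener step (address and port taken from the log):

    # Listen for the subsystem and for the discovery service on the same address/port.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420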
00:33:19.181 10:29:52 -- host/target_disconnect.sh@58 -- # wait 3656141 00:33:19.181 [2024-04-17 10:29:52.245453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.245529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.245544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.245550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.245556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.245570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.255407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.255487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.255503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.255509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.255514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.255528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.265438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.265510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.265525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.265532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.265537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.265550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 
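The "wait 3656141" trace shows the test script blocking on a background job it launched earlier (the PID comes from the log; the job itself is outside this excerpt). The general shape of that pattern, as a sketch rather than the actual target_disconnect.sh code:

    # Illustrative shape of the wait-on-background-job pattern, not the test script itself.
    some_long_running_host_step &   # hypothetical placeholder for the backgrounded work
    bg_pid=$!
    # ... reconfigure or disturb the target while the job runs ...
    wait "$bg_pid"                  # block here and propagate the job's exit status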
00:33:19.181 [2024-04-17 10:29:52.275473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.275550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.275566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.275574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.275579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.275592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.285512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.285583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.285598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.285604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.285609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.285622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.295493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.295570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.295584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.295591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.295596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.295609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 
00:33:19.181 [2024-04-17 10:29:52.305558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.305635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.305654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.305660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.305666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.305679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.315577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.315654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.315669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.181 [2024-04-17 10:29:52.315675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.181 [2024-04-17 10:29:52.315680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.181 [2024-04-17 10:29:52.315694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.181 qpair failed and we were unable to recover it. 00:33:19.181 [2024-04-17 10:29:52.325601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.181 [2024-04-17 10:29:52.325671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.181 [2024-04-17 10:29:52.325686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.325692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.325697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.325710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 
00:33:19.182 [2024-04-17 10:29:52.335611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.335687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.335702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.335708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.335713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.335727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.345743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.345815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.345831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.345837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.345843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.345857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.355844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.355920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.355935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.355941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.355947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.355961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 
00:33:19.182 [2024-04-17 10:29:52.365833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.365916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.365933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.365939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.365945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.365958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.375837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.375940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.375954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.375962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.375968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.375982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.385834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.385930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.385945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.385951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.385957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.385971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 
00:33:19.182 [2024-04-17 10:29:52.395903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.395977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.395991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.395997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.396003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.396016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.405869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.405936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.405953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.405960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.405967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.405985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.415876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.415947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.415962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.415969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.415974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.415987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 
00:33:19.182 [2024-04-17 10:29:52.425926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.425997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.426011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.426017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.426022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.426035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.435950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.436025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.436040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.436047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.436052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.436065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.182 [2024-04-17 10:29:52.445936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.446011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.446025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.446032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.446037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.446051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 
00:33:19.182 [2024-04-17 10:29:52.455949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.182 [2024-04-17 10:29:52.456023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.182 [2024-04-17 10:29:52.456040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.182 [2024-04-17 10:29:52.456047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.182 [2024-04-17 10:29:52.456052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.182 [2024-04-17 10:29:52.456065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.182 qpair failed and we were unable to recover it. 00:33:19.183 [2024-04-17 10:29:52.466029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.183 [2024-04-17 10:29:52.466118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.183 [2024-04-17 10:29:52.466133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.183 [2024-04-17 10:29:52.466139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.183 [2024-04-17 10:29:52.466144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.183 [2024-04-17 10:29:52.466157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.183 qpair failed and we were unable to recover it. 00:33:19.183 [2024-04-17 10:29:52.476094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.183 [2024-04-17 10:29:52.476166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.183 [2024-04-17 10:29:52.476181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.183 [2024-04-17 10:29:52.476187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.183 [2024-04-17 10:29:52.476192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.183 [2024-04-17 10:29:52.476206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.183 qpair failed and we were unable to recover it. 
00:33:19.183 [2024-04-17 10:29:52.486046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.183 [2024-04-17 10:29:52.486127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.183 [2024-04-17 10:29:52.486142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.183 [2024-04-17 10:29:52.486148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.183 [2024-04-17 10:29:52.486153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.183 [2024-04-17 10:29:52.486165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.183 qpair failed and we were unable to recover it. 00:33:19.183 [2024-04-17 10:29:52.496118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.183 [2024-04-17 10:29:52.496192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.183 [2024-04-17 10:29:52.496207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.183 [2024-04-17 10:29:52.496213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.183 [2024-04-17 10:29:52.496218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.183 [2024-04-17 10:29:52.496237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.183 qpair failed and we were unable to recover it. 00:33:19.183 [2024-04-17 10:29:52.506130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.183 [2024-04-17 10:29:52.506215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.183 [2024-04-17 10:29:52.506230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.183 [2024-04-17 10:29:52.506236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.183 [2024-04-17 10:29:52.506241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.183 [2024-04-17 10:29:52.506254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.183 qpair failed and we were unable to recover it. 
00:33:19.443 [2024-04-17 10:29:52.516184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.516259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.516274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.516280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.516286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.516298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.526253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.526330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.526345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.526351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.526356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.526369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.536172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.536260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.536274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.536280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.536286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.536299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 
00:33:19.443 [2024-04-17 10:29:52.546292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.546369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.546386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.546393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.546398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.546412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.556311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.556387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.556402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.556408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.556413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.556426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.566276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.566346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.566360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.566366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.566372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.566385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 
00:33:19.443 [2024-04-17 10:29:52.576372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.576445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.576460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.576466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.576471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.576484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.586358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.586436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.586452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.586458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.586466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.586479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.596404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.596516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.596532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.596538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.596543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.596557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 
00:33:19.443 [2024-04-17 10:29:52.606398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.606467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.606482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.606488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.606493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.606506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.616530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.616599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.616613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.616619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.616624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.616637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 00:33:19.443 [2024-04-17 10:29:52.626431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.443 [2024-04-17 10:29:52.626509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.443 [2024-04-17 10:29:52.626523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.443 [2024-04-17 10:29:52.626529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.443 [2024-04-17 10:29:52.626534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.443 [2024-04-17 10:29:52.626547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.443 qpair failed and we were unable to recover it. 
00:33:19.443 [2024-04-17 10:29:52.636471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.636552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.636567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.636573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.636578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.636591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.646534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.646607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.646622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.646628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.646633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.646651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.656546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.656621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.656635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.656641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.656653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.656666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 
00:33:19.444 [2024-04-17 10:29:52.666651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.666727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.666742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.666748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.666753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.666766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.676599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.676709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.676723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.676729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.676737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.676751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.686679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.686746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.686760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.686766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.686771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.686784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 
00:33:19.444 [2024-04-17 10:29:52.696680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.696753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.696767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.696773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.696778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.696791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.706732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.706809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.706823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.706829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.706835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.706848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.716740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.716858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.716872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.716878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.716883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.716897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 
00:33:19.444 [2024-04-17 10:29:52.726859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.726936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.726950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.726956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.726961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.726975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.736846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.736923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.736937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.736944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.736948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.736962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.746811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.746884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.746899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.746905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.746910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.746923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 
00:33:19.444 [2024-04-17 10:29:52.756858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.756937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.756952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.756959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.444 [2024-04-17 10:29:52.756964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.444 [2024-04-17 10:29:52.756977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.444 qpair failed and we were unable to recover it. 00:33:19.444 [2024-04-17 10:29:52.766903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.444 [2024-04-17 10:29:52.766977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.444 [2024-04-17 10:29:52.766992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.444 [2024-04-17 10:29:52.767001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.445 [2024-04-17 10:29:52.767006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.445 [2024-04-17 10:29:52.767019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.445 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.776990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.777079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.777094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.777100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.777105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.777118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 
00:33:19.704 [2024-04-17 10:29:52.786927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.787003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.787018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.787024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.787030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.787043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.797056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.797127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.797142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.797148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.797153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.797167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.807067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.807145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.807159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.807165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.807170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.807183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 
00:33:19.704 [2024-04-17 10:29:52.817096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.817168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.817182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.817188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.817193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.817206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.827132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.827205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.827219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.827226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.827231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.827245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.837165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.837233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.837248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.837254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.837259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.837272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 
00:33:19.704 [2024-04-17 10:29:52.847185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.847263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.847278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.847284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.847290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.704 [2024-04-17 10:29:52.847303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.704 qpair failed and we were unable to recover it. 00:33:19.704 [2024-04-17 10:29:52.857234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.704 [2024-04-17 10:29:52.857349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.704 [2024-04-17 10:29:52.857366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.704 [2024-04-17 10:29:52.857372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.704 [2024-04-17 10:29:52.857377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.857390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.867190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.867271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.867286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.867292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.867297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.867311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 
00:33:19.705 [2024-04-17 10:29:52.877301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.877376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.877391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.877397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.877403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.877416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.887329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.887402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.887417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.887423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.887428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.887441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.897347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.897418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.897433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.897438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.897444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.897457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 
00:33:19.705 [2024-04-17 10:29:52.907331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.907398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.907413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.907419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.907425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.907437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.917399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.917485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.917499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.917505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.917510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.917524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.927455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.927528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.927543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.927550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.927555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.927568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 
00:33:19.705 [2024-04-17 10:29:52.937476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.937598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.937613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.937619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.937624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.937637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.947487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.947565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.947582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.947589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.947594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.947606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.957541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.957652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.957667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.957673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.957678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.957691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 
00:33:19.705 [2024-04-17 10:29:52.967613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.967697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.967713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.967719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.967724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.967737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.977527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.977598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.977613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.977619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.977624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.977637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 00:33:19.705 [2024-04-17 10:29:52.987624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.705 [2024-04-17 10:29:52.987721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.705 [2024-04-17 10:29:52.987736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.705 [2024-04-17 10:29:52.987742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.705 [2024-04-17 10:29:52.987747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.705 [2024-04-17 10:29:52.987763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.705 qpair failed and we were unable to recover it. 
00:33:19.706 [2024-04-17 10:29:52.997693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.706 [2024-04-17 10:29:52.997773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.706 [2024-04-17 10:29:52.997787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.706 [2024-04-17 10:29:52.997794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.706 [2024-04-17 10:29:52.997799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.706 [2024-04-17 10:29:52.997812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.706 qpair failed and we were unable to recover it. 00:33:19.706 [2024-04-17 10:29:53.007681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.706 [2024-04-17 10:29:53.007753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.706 [2024-04-17 10:29:53.007768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.706 [2024-04-17 10:29:53.007774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.706 [2024-04-17 10:29:53.007779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.706 [2024-04-17 10:29:53.007791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.706 qpair failed and we were unable to recover it. 00:33:19.706 [2024-04-17 10:29:53.017722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.706 [2024-04-17 10:29:53.017797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.706 [2024-04-17 10:29:53.017812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.706 [2024-04-17 10:29:53.017819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.706 [2024-04-17 10:29:53.017824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.706 [2024-04-17 10:29:53.017837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.706 qpair failed and we were unable to recover it. 
00:33:19.706 [2024-04-17 10:29:53.027752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.706 [2024-04-17 10:29:53.027827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.706 [2024-04-17 10:29:53.027842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.706 [2024-04-17 10:29:53.027848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.706 [2024-04-17 10:29:53.027853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.706 [2024-04-17 10:29:53.027866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.706 qpair failed and we were unable to recover it. 00:33:19.965 [2024-04-17 10:29:53.037799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.965 [2024-04-17 10:29:53.037869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.965 [2024-04-17 10:29:53.037886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.965 [2024-04-17 10:29:53.037892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.965 [2024-04-17 10:29:53.037897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.965 [2024-04-17 10:29:53.037910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.965 qpair failed and we were unable to recover it. 00:33:19.965 [2024-04-17 10:29:53.047823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.965 [2024-04-17 10:29:53.047898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.965 [2024-04-17 10:29:53.047913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.965 [2024-04-17 10:29:53.047919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.965 [2024-04-17 10:29:53.047924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.965 [2024-04-17 10:29:53.047936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.965 qpair failed and we were unable to recover it. 
00:33:19.965 [2024-04-17 10:29:53.057856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.965 [2024-04-17 10:29:53.057931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.965 [2024-04-17 10:29:53.057945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.965 [2024-04-17 10:29:53.057952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.965 [2024-04-17 10:29:53.057957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.965 [2024-04-17 10:29:53.057970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.965 qpair failed and we were unable to recover it. 00:33:19.965 [2024-04-17 10:29:53.067904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.965 [2024-04-17 10:29:53.067980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.965 [2024-04-17 10:29:53.067995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.965 [2024-04-17 10:29:53.068001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.965 [2024-04-17 10:29:53.068006] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.965 [2024-04-17 10:29:53.068019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.965 qpair failed and we were unable to recover it. 00:33:19.965 [2024-04-17 10:29:53.077963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.078056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.078071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.078077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.078085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.078098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 
00:33:19.966 [2024-04-17 10:29:53.087989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.088062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.088077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.088083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.088088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.088101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.097988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.098061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.098076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.098082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.098088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.098101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.107998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.108071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.108085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.108091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.108097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.108110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 
00:33:19.966 [2024-04-17 10:29:53.118036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.118163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.118178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.118184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.118189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.118202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.128059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.128135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.128149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.128155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.128160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.128173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.138135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.138205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.138219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.138225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.138230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.138243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 
00:33:19.966 [2024-04-17 10:29:53.148161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.148237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.148252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.148258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.148263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.148277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.158188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.158278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.158292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.158298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.158303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.158316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.168214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.168284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.168298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.168304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.168312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.168325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 
00:33:19.966 [2024-04-17 10:29:53.178264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.178381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.178395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.178401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.178406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.178419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.188237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.188314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.188328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.188334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.188339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.188352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 00:33:19.966 [2024-04-17 10:29:53.198278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.198356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.198371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.966 [2024-04-17 10:29:53.198377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.966 [2024-04-17 10:29:53.198381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.966 [2024-04-17 10:29:53.198395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.966 qpair failed and we were unable to recover it. 
00:33:19.966 [2024-04-17 10:29:53.208314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.966 [2024-04-17 10:29:53.208389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.966 [2024-04-17 10:29:53.208405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.208411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.208416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.208429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.218347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.218424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.218439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.218445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.218451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.218463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.228372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.228445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.228460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.228466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.228471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.228484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 
00:33:19.967 [2024-04-17 10:29:53.238399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.238478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.238493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.238499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.238504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.238517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.248438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.248540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.248555] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.248561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.248566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.248579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.258472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.258580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.258594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.258604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.258610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.258623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 
00:33:19.967 [2024-04-17 10:29:53.268534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.268626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.268641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.268650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.268655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.268668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.278521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.278596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.278611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.278616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.278622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.278634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 00:33:19.967 [2024-04-17 10:29:53.288567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.967 [2024-04-17 10:29:53.288667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.967 [2024-04-17 10:29:53.288682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.967 [2024-04-17 10:29:53.288688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.967 [2024-04-17 10:29:53.288693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:19.967 [2024-04-17 10:29:53.288706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.967 qpair failed and we were unable to recover it. 
00:33:20.227 [2024-04-17 10:29:53.298568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.298640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.298658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.298664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.298669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.298682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.308675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.308759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.308774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.308780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.308785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.308798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.318601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.318687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.318702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.318707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.318712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.318726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 
00:33:20.227 [2024-04-17 10:29:53.328647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.328716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.328730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.328736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.328741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.328754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.338699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.338769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.338784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.338790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.338795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.338808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.348719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.348795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.348809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.348818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.348823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.348835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 
00:33:20.227 [2024-04-17 10:29:53.358769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.358844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.358858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.358864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.358869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.358881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.368806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.368895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.368910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.368916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.368921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.368933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.227 qpair failed and we were unable to recover it. 00:33:20.227 [2024-04-17 10:29:53.378808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.227 [2024-04-17 10:29:53.378879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.227 [2024-04-17 10:29:53.378893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.227 [2024-04-17 10:29:53.378899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.227 [2024-04-17 10:29:53.378904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.227 [2024-04-17 10:29:53.378918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 
00:33:20.228 [2024-04-17 10:29:53.388840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.388915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.388930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.388936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.388941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.388954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.398856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.398939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.398953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.398960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.398965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.398978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.408960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.409032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.409046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.409053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.409058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.409071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 
00:33:20.228 [2024-04-17 10:29:53.418887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.418962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.418976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.418982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.418987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.419000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.428954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.429027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.429042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.429048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.429053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.429066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.439026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.439098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.439116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.439122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.439127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.439140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 
00:33:20.228 [2024-04-17 10:29:53.449067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.449147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.449161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.449167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.449172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.449184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.459054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.459146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.459160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.459165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.459170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.459184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.469081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.469159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.469173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.469179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.469184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.469197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 
00:33:20.228 [2024-04-17 10:29:53.479043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.479142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.479156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.479162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.479167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.479183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.489138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.489210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.489224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.489230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.489235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.489247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 00:33:20.228 [2024-04-17 10:29:53.499183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.499293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.499307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.228 [2024-04-17 10:29:53.499313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.228 [2024-04-17 10:29:53.499318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.228 [2024-04-17 10:29:53.499331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.228 qpair failed and we were unable to recover it. 
00:33:20.228 [2024-04-17 10:29:53.509193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.228 [2024-04-17 10:29:53.509272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.228 [2024-04-17 10:29:53.509286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.229 [2024-04-17 10:29:53.509292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.229 [2024-04-17 10:29:53.509297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.229 [2024-04-17 10:29:53.509310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.229 qpair failed and we were unable to recover it. 00:33:20.229 [2024-04-17 10:29:53.519194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.229 [2024-04-17 10:29:53.519266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.229 [2024-04-17 10:29:53.519280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.229 [2024-04-17 10:29:53.519286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.229 [2024-04-17 10:29:53.519291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.229 [2024-04-17 10:29:53.519304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.229 qpair failed and we were unable to recover it. 00:33:20.229 [2024-04-17 10:29:53.529306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.229 [2024-04-17 10:29:53.529382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.229 [2024-04-17 10:29:53.529399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.229 [2024-04-17 10:29:53.529405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.229 [2024-04-17 10:29:53.529410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.229 [2024-04-17 10:29:53.529422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.229 qpair failed and we were unable to recover it. 
00:33:20.229 [2024-04-17 10:29:53.539293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.229 [2024-04-17 10:29:53.539397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.229 [2024-04-17 10:29:53.539411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.229 [2024-04-17 10:29:53.539417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.229 [2024-04-17 10:29:53.539422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.229 [2024-04-17 10:29:53.539435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.229 qpair failed and we were unable to recover it. 00:33:20.229 [2024-04-17 10:29:53.549311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.229 [2024-04-17 10:29:53.549410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.229 [2024-04-17 10:29:53.549425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.229 [2024-04-17 10:29:53.549431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.229 [2024-04-17 10:29:53.549436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.229 [2024-04-17 10:29:53.549449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.229 qpair failed and we were unable to recover it. 00:33:20.489 [2024-04-17 10:29:53.559344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.559415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.559429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.559435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.559440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.559454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 
00:33:20.489 [2024-04-17 10:29:53.569405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.569484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.569498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.569505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.569510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.569526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 00:33:20.489 [2024-04-17 10:29:53.579418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.579492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.579506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.579512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.579517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.579530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 00:33:20.489 [2024-04-17 10:29:53.589430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.589503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.589519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.589526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.589531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.589543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 
00:33:20.489 [2024-04-17 10:29:53.599487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.599557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.599572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.599578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.599583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.599596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 00:33:20.489 [2024-04-17 10:29:53.609543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.609613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.609628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.609634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.609639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.609658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 00:33:20.489 [2024-04-17 10:29:53.619559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.619633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.619652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.619659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.619664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.619677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 
00:33:20.489 [2024-04-17 10:29:53.629477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.489 [2024-04-17 10:29:53.629548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.489 [2024-04-17 10:29:53.629563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.489 [2024-04-17 10:29:53.629569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.489 [2024-04-17 10:29:53.629574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.489 [2024-04-17 10:29:53.629587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.489 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.639509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.639581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.639595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.639601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.639607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.639620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.649545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.649659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.649673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.649679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.649685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.649698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 
00:33:20.490 [2024-04-17 10:29:53.659663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.659773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.659787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.659793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.659802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.659815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.669686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.669757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.669771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.669777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.669782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.669795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.679709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.679775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.679790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.679796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.679802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.679815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 
00:33:20.490 [2024-04-17 10:29:53.689701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.689772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.689787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.689792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.689798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.689811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.699830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.699935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.699950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.699955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.699961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.699974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.709807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.709895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.709910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.709916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.709921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.709933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 
00:33:20.490 [2024-04-17 10:29:53.719832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.719905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.719920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.719926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.719931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.719944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.729937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.730019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.730033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.730039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.730044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.730057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.739941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.740013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.740027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.740033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.740039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.740052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 
00:33:20.490 [2024-04-17 10:29:53.749934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.750008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.750023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.750032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.750037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.750050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.759894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.759985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.490 [2024-04-17 10:29:53.759999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.490 [2024-04-17 10:29:53.760005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.490 [2024-04-17 10:29:53.760011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.490 [2024-04-17 10:29:53.760023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.490 qpair failed and we were unable to recover it. 00:33:20.490 [2024-04-17 10:29:53.769985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.490 [2024-04-17 10:29:53.770058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.491 [2024-04-17 10:29:53.770072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.491 [2024-04-17 10:29:53.770079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.491 [2024-04-17 10:29:53.770084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.491 [2024-04-17 10:29:53.770096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.491 qpair failed and we were unable to recover it. 
00:33:20.491 [2024-04-17 10:29:53.780044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.491 [2024-04-17 10:29:53.780114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.491 [2024-04-17 10:29:53.780129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.491 [2024-04-17 10:29:53.780135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.491 [2024-04-17 10:29:53.780140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.491 [2024-04-17 10:29:53.780152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.491 qpair failed and we were unable to recover it. 00:33:20.491 [2024-04-17 10:29:53.790073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.491 [2024-04-17 10:29:53.790152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.491 [2024-04-17 10:29:53.790166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.491 [2024-04-17 10:29:53.790172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.491 [2024-04-17 10:29:53.790177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.491 [2024-04-17 10:29:53.790190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.491 qpair failed and we were unable to recover it. 00:33:20.491 [2024-04-17 10:29:53.800077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.491 [2024-04-17 10:29:53.800151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.491 [2024-04-17 10:29:53.800166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.491 [2024-04-17 10:29:53.800172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.491 [2024-04-17 10:29:53.800177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.491 [2024-04-17 10:29:53.800189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.491 qpair failed and we were unable to recover it. 
00:33:20.491 [2024-04-17 10:29:53.810099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.491 [2024-04-17 10:29:53.810171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.491 [2024-04-17 10:29:53.810185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.491 [2024-04-17 10:29:53.810191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.491 [2024-04-17 10:29:53.810196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.491 [2024-04-17 10:29:53.810209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.491 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.820131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.820225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.820239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.820245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.820250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.820263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.830213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.830292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.830306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.830312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.830317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.830330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 
00:33:20.751 [2024-04-17 10:29:53.840220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.840306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.840320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.840328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.840334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.840346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.850214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.850281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.850295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.850301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.850306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.850319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.860283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.860396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.860410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.860416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.860421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.860435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 
00:33:20.751 [2024-04-17 10:29:53.870335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.870426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.870440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.870446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.870451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.870464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.880310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.880381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.880396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.880401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.880407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.880420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.890396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.890495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.890510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.890516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.890521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.890534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 
00:33:20.751 [2024-04-17 10:29:53.900318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.900393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.900408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.900413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.900419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.900431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.910420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.910492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.910507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.751 [2024-04-17 10:29:53.910512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.751 [2024-04-17 10:29:53.910518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.751 [2024-04-17 10:29:53.910531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.751 qpair failed and we were unable to recover it. 00:33:20.751 [2024-04-17 10:29:53.920419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.751 [2024-04-17 10:29:53.920494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.751 [2024-04-17 10:29:53.920509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.920516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.920521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.920534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 
00:33:20.752 [2024-04-17 10:29:53.930463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.930582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.930600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.930606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.930611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.930624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:53.940457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.940576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.940590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.940596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.940601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.940614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:53.950546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.950626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.950641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.950651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.950656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.950670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 
00:33:20.752 [2024-04-17 10:29:53.960558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.960630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.960650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.960657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.960662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.960675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:53.970618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.970693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.970707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.970714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.970719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.970735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:53.980677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.980786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.980801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.980807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.980812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.980826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 
00:33:20.752 [2024-04-17 10:29:53.990609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:53.990689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:53.990705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:53.990711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:53.990716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:20.752 [2024-04-17 10:29:53.990730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:54.000737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.000876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.000923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.000944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.000960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.752 [2024-04-17 10:29:54.001000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:54.010736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.010860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.010886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.010899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.010909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.752 [2024-04-17 10:29:54.010933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.752 qpair failed and we were unable to recover it. 
00:33:20.752 [2024-04-17 10:29:54.020793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.020896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.020922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.020933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.020942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.752 [2024-04-17 10:29:54.020961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:54.030845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.030968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.030990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.031000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.031009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.752 [2024-04-17 10:29:54.031028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.752 qpair failed and we were unable to recover it. 00:33:20.752 [2024-04-17 10:29:54.040855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.040944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.040965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.040975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.040984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.752 [2024-04-17 10:29:54.041003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.752 qpair failed and we were unable to recover it. 
00:33:20.752 [2024-04-17 10:29:54.050873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.752 [2024-04-17 10:29:54.050962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.752 [2024-04-17 10:29:54.050983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.752 [2024-04-17 10:29:54.050992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.752 [2024-04-17 10:29:54.051001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.753 [2024-04-17 10:29:54.051020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.753 qpair failed and we were unable to recover it. 00:33:20.753 [2024-04-17 10:29:54.060899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.753 [2024-04-17 10:29:54.060989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.753 [2024-04-17 10:29:54.061009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.753 [2024-04-17 10:29:54.061019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.753 [2024-04-17 10:29:54.061028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.753 [2024-04-17 10:29:54.061050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.753 qpair failed and we were unable to recover it. 00:33:20.753 [2024-04-17 10:29:54.070894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.753 [2024-04-17 10:29:54.071023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.753 [2024-04-17 10:29:54.071043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.753 [2024-04-17 10:29:54.071053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.753 [2024-04-17 10:29:54.071062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.753 [2024-04-17 10:29:54.071080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.753 qpair failed and we were unable to recover it. 
00:33:20.753 [2024-04-17 10:29:54.080885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:20.753 [2024-04-17 10:29:54.080976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:20.753 [2024-04-17 10:29:54.080996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:20.753 [2024-04-17 10:29:54.081005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:20.753 [2024-04-17 10:29:54.081014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:20.753 [2024-04-17 10:29:54.081032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:20.753 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.091075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.091163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.091184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.091193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.091202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.091220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.101012] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.101104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.101124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.101134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.101142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.101160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 
00:33:21.014 [2024-04-17 10:29:54.111050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.111136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.111161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.111170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.111179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.111197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.121124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.121205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.121226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.121235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.121243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.121262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.131097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.131190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.131211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.131220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.131228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.131247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 
00:33:21.014 [2024-04-17 10:29:54.141215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.141306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.141326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.141335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.141344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.141362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.151211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.151304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.151324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.151334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.151342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.151365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.161172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.161261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.161281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.161290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.161298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.161316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 
00:33:21.014 [2024-04-17 10:29:54.171248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.171341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.171361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.171374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.171385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.171404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.181295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.181382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.181402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.181411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.181420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.181438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.191308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.191397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.191418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.191427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.191436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.191454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 
00:33:21.014 [2024-04-17 10:29:54.201370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.201460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.201489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.201498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.201506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.201525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.211327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.211408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.014 [2024-04-17 10:29:54.211430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.014 [2024-04-17 10:29:54.211440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.014 [2024-04-17 10:29:54.211448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.014 [2024-04-17 10:29:54.211467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.014 qpair failed and we were unable to recover it. 00:33:21.014 [2024-04-17 10:29:54.221418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.014 [2024-04-17 10:29:54.221504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.221525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.221534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.221543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.221561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 
00:33:21.015 [2024-04-17 10:29:54.231432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.231539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.231559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.231569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.231578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.231596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.241444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.241533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.241554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.241563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.241572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.241594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.251481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.251565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.251585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.251595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.251603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.251621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 
00:33:21.015 [2024-04-17 10:29:54.261527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.261673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.261694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.261703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.261712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.261731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.271553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.271686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.271706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.271715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.271724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.271743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.281584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.281676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.281697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.281706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.281715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.281734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 
00:33:21.015 [2024-04-17 10:29:54.291565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.291680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.291704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.291714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.291722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.291741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.301660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.301757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.301777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.301786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.301795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.301814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.311676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.311784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.311803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.311813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.311821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.311840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 
00:33:21.015 [2024-04-17 10:29:54.321707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.321792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.321813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.321822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.321830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.321849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.331746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.331872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.331892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.331902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.331914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.331933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 00:33:21.015 [2024-04-17 10:29:54.341755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.015 [2024-04-17 10:29:54.341843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.015 [2024-04-17 10:29:54.341863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.015 [2024-04-17 10:29:54.341872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.015 [2024-04-17 10:29:54.341881] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.015 [2024-04-17 10:29:54.341899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.015 qpair failed and we were unable to recover it. 
00:33:21.275 [2024-04-17 10:29:54.351828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.351950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.351973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.351982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.351991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.352011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.361839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.361928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.361949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.361958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.361966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.361985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.371990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.372092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.372113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.372122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.372131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.372149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 
00:33:21.275 [2024-04-17 10:29:54.382017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.382129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.382153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.382162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.382171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.382189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.391950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.392067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.392087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.392096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.392105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.392123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.401991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.402081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.402102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.402113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.402123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.402142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 
00:33:21.275 [2024-04-17 10:29:54.411985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.412069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.412090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.412099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.412108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.412127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.422003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.422089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.422109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.422118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.422131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.422149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.431990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.432091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.432111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.432121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.432129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.432148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 
00:33:21.275 [2024-04-17 10:29:54.442058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.442145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.442166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.442175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.442183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.442201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.452091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.452177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.452197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.452206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.452214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.452232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.462125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.462214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.462234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.462243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.462251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.462269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 
00:33:21.275 [2024-04-17 10:29:54.472176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.472273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.472294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.472304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.472312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.472330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.482168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.482254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.482274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.482283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.275 [2024-04-17 10:29:54.482291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.275 [2024-04-17 10:29:54.482309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.275 qpair failed and we were unable to recover it. 00:33:21.275 [2024-04-17 10:29:54.492214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.275 [2024-04-17 10:29:54.492303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.275 [2024-04-17 10:29:54.492324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.275 [2024-04-17 10:29:54.492333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.492341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.492360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 
00:33:21.276 [2024-04-17 10:29:54.502252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.502339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.502359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.502369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.502378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.502396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.512299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.512389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.512409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.512418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.512430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.512449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.522298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.522395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.522416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.522426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.522434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.522452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 
00:33:21.276 [2024-04-17 10:29:54.532334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.532459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.532479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.532489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.532497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.532516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.542346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.542434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.542454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.542463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.542472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.542490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.552368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.552452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.552473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.552483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.552491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.552509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 
00:33:21.276 [2024-04-17 10:29:54.562413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.562527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.562548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.562557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.562566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.562584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.572501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.572590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.572611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.572620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.572628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.572651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.582486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.582572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.582592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.582601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.582610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.582627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 
00:33:21.276 [2024-04-17 10:29:54.592488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.592578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.592598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.592607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.592615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.592633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.276 [2024-04-17 10:29:54.602517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.276 [2024-04-17 10:29:54.602602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.276 [2024-04-17 10:29:54.602623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.276 [2024-04-17 10:29:54.602632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.276 [2024-04-17 10:29:54.602650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.276 [2024-04-17 10:29:54.602669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.276 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.612596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.612691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.612711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.612721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.612729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.612747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 
00:33:21.536 [2024-04-17 10:29:54.622588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.622689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.622709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.622719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.622727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.622745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.632593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.632701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.632722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.632733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.632742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.632760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.642612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.642707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.642728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.642737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.642745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.642764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 
00:33:21.536 [2024-04-17 10:29:54.652668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.652757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.652777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.652787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.652795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.652813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.662726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.662863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.662883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.662892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.662901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.662919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.672724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.672815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.672835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.672844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.672853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.672871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 
00:33:21.536 [2024-04-17 10:29:54.682733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.682832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.682853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.682862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.682871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.682889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.692752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.692841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.692861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.692870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.692883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.692902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.536 [2024-04-17 10:29:54.702772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.702870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.702890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.702900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.702909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.702927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 
00:33:21.536 [2024-04-17 10:29:54.712839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.536 [2024-04-17 10:29:54.712958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.536 [2024-04-17 10:29:54.712979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.536 [2024-04-17 10:29:54.712989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.536 [2024-04-17 10:29:54.712997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.536 [2024-04-17 10:29:54.713017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.536 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.722816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.722907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.722928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.722937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.722946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.722965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.732901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.733031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.733050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.733060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.733069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.733088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 
00:33:21.537 [2024-04-17 10:29:54.742925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.743016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.743036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.743046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.743054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.743072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.752999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.753142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.753163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.753172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.753181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.753199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.763002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.763095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.763115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.763125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.763133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.763154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 
00:33:21.537 [2024-04-17 10:29:54.773009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.773096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.773117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.773126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.773134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.773153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.783022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.783109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.783130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.783140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.783152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.783171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.793060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.793151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.793172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.793181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.793190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.793208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 
00:33:21.537 [2024-04-17 10:29:54.803091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.803183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.803204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.803213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.803221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.803239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.813131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.813221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.813241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.813251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.813259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.813277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.823168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.823251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.823271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.823281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.823289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.823307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 
00:33:21.537 [2024-04-17 10:29:54.833280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.537 [2024-04-17 10:29:54.833374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.537 [2024-04-17 10:29:54.833394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.537 [2024-04-17 10:29:54.833403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.537 [2024-04-17 10:29:54.833411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.537 [2024-04-17 10:29:54.833430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.537 qpair failed and we were unable to recover it. 00:33:21.537 [2024-04-17 10:29:54.843232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.538 [2024-04-17 10:29:54.843316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.538 [2024-04-17 10:29:54.843336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.538 [2024-04-17 10:29:54.843346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.538 [2024-04-17 10:29:54.843354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.538 [2024-04-17 10:29:54.843373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.538 qpair failed and we were unable to recover it. 00:33:21.538 [2024-04-17 10:29:54.853255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.538 [2024-04-17 10:29:54.853343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.538 [2024-04-17 10:29:54.853363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.538 [2024-04-17 10:29:54.853373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.538 [2024-04-17 10:29:54.853382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.538 [2024-04-17 10:29:54.853400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.538 qpair failed and we were unable to recover it. 
00:33:21.538 [2024-04-17 10:29:54.863273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.538 [2024-04-17 10:29:54.863360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.538 [2024-04-17 10:29:54.863381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.538 [2024-04-17 10:29:54.863391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.538 [2024-04-17 10:29:54.863400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.538 [2024-04-17 10:29:54.863419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.538 qpair failed and we were unable to recover it. 00:33:21.797 [2024-04-17 10:29:54.873275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.873368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.873388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.873403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.873411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.873431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 00:33:21.797 [2024-04-17 10:29:54.883345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.883471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.883492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.883502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.883511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.883531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 
00:33:21.797 [2024-04-17 10:29:54.893446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.893576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.893597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.893607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.893615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.893634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 00:33:21.797 [2024-04-17 10:29:54.903415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.903507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.903527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.903538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.903547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.903567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 00:33:21.797 [2024-04-17 10:29:54.913488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.913628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.913656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.913667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.913676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.913695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 
00:33:21.797 [2024-04-17 10:29:54.923455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.797 [2024-04-17 10:29:54.923548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.797 [2024-04-17 10:29:54.923569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.797 [2024-04-17 10:29:54.923579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.797 [2024-04-17 10:29:54.923587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.797 [2024-04-17 10:29:54.923605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.797 qpair failed and we were unable to recover it. 00:33:21.797 [2024-04-17 10:29:54.933501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.933640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.933667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.933677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.933685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.933704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:54.943545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.943628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.943652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.943662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.943671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.943690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 
00:33:21.798 [2024-04-17 10:29:54.953559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.953656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.953677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.953687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.953696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.953715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:54.963515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.963604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.963625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.963639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.963653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.963672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:54.973610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.973705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.973726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.973737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.973745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.973766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 
00:33:21.798 [2024-04-17 10:29:54.983676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.983768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.983789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.983798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.983807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.983826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:54.993674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:54.993765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:54.993785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:54.993795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:54.993804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:54.993823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:55.003681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.003764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.003785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.003795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.003804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.003823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 
00:33:21.798 [2024-04-17 10:29:55.013727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.013811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.013832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.013842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.013851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.013870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:55.023758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.023851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.023871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.023881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.023890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.023909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:55.033784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.033912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.033933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.033943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.033953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.033972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 
00:33:21.798 [2024-04-17 10:29:55.043806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.043897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.043918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.043927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.043936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.043956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:55.053831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.053948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.053968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.053985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.798 [2024-04-17 10:29:55.053994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.798 [2024-04-17 10:29:55.054013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.798 qpair failed and we were unable to recover it. 00:33:21.798 [2024-04-17 10:29:55.063896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.798 [2024-04-17 10:29:55.063986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.798 [2024-04-17 10:29:55.064006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.798 [2024-04-17 10:29:55.064016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.064025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.064044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 
00:33:21.799 [2024-04-17 10:29:55.073936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.074032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.074053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.074063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.074072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.074091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 00:33:21.799 [2024-04-17 10:29:55.083948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.084029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.084049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.084060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.084068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.084087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 00:33:21.799 [2024-04-17 10:29:55.093971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.094057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.094077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.094087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.094096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.094114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 
00:33:21.799 [2024-04-17 10:29:55.104043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.104133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.104153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.104164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.104172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.104191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 00:33:21.799 [2024-04-17 10:29:55.114026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.114133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.114154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.114164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.114174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.114192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 00:33:21.799 [2024-04-17 10:29:55.124066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:21.799 [2024-04-17 10:29:55.124153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:21.799 [2024-04-17 10:29:55.124173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:21.799 [2024-04-17 10:29:55.124182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:21.799 [2024-04-17 10:29:55.124191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:21.799 [2024-04-17 10:29:55.124209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:21.799 qpair failed and we were unable to recover it. 
00:33:22.058 [2024-04-17 10:29:55.134129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.058 [2024-04-17 10:29:55.134215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.058 [2024-04-17 10:29:55.134235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.058 [2024-04-17 10:29:55.134245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.058 [2024-04-17 10:29:55.134254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.058 [2024-04-17 10:29:55.134272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.058 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.144136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.144221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.144242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.144255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.144264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.144283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.154161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.154252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.154273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.154283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.154292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.154312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 
00:33:22.059 [2024-04-17 10:29:55.164108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.164198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.164218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.164229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.164238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.164256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.174217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.174303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.174324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.174334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.174344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.174362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.184262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.184359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.184381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.184391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.184400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.184419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 
00:33:22.059 [2024-04-17 10:29:55.194287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.194377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.194398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.194408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.194417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.194436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.204285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.204370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.204391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.204402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.204411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.204430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.214370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.214507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.214528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.214538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.214547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.214566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 
00:33:22.059 [2024-04-17 10:29:55.224400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.224518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.224539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.224549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.224557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.224576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.234409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.234505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.234525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.234539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.234548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.234566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.244440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.244524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.244546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.244557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.244566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.244585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 
00:33:22.059 [2024-04-17 10:29:55.254404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.254492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.254514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.254524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.254532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.254551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.264506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.264595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.264616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.059 [2024-04-17 10:29:55.264626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.059 [2024-04-17 10:29:55.264635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.059 [2024-04-17 10:29:55.264658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.059 qpair failed and we were unable to recover it. 00:33:22.059 [2024-04-17 10:29:55.274595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.059 [2024-04-17 10:29:55.274717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.059 [2024-04-17 10:29:55.274738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.274748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.274757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.274777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 
00:33:22.060 [2024-04-17 10:29:55.284558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.284641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.284668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.284678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.284688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.284707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.294592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.294682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.294703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.294713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.294722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.294741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.304641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.304747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.304768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.304778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.304787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.304805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 
00:33:22.060 [2024-04-17 10:29:55.314620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.314704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.314726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.314735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.314744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.314763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.324608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.324708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.324733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.324744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.324753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.324772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.334711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.334794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.334816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.334826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.334834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.334853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 
00:33:22.060 [2024-04-17 10:29:55.344745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.344881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.344902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.344912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.344920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.344939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.354773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.354870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.354893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.354904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.354913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.354933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.364834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.364922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.364942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.364952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.364961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.364980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 
00:33:22.060 [2024-04-17 10:29:55.374835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.374922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.374942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.374952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.374960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.374980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.060 [2024-04-17 10:29:55.384909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.060 [2024-04-17 10:29:55.385001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.060 [2024-04-17 10:29:55.385022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.060 [2024-04-17 10:29:55.385032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.060 [2024-04-17 10:29:55.385042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.060 [2024-04-17 10:29:55.385061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.060 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.394869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.394959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.394980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.394990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.394999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.395017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 
00:33:22.321 [2024-04-17 10:29:55.404915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.405043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.405064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.405074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.405083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.405102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.414961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.415047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.415071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.415081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.415090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.415109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.424968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.425068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.425089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.425098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.425108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.425126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 
00:33:22.321 [2024-04-17 10:29:55.434985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.435104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.435126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.435137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.435146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.435164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.444963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.445055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.445077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.445088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.445096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.445115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.454992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.455118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.455139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.455149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.455158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.455177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 
00:33:22.321 [2024-04-17 10:29:55.465088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.465199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.465220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.465229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.465238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.465257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.475057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.475149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.475170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.475180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.475190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.475209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.485173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.485305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.485326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.485336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.485345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.485364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 
00:33:22.321 [2024-04-17 10:29:55.495229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.495325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.495345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.495355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.321 [2024-04-17 10:29:55.495364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.321 [2024-04-17 10:29:55.495384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.321 qpair failed and we were unable to recover it. 00:33:22.321 [2024-04-17 10:29:55.505153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.321 [2024-04-17 10:29:55.505271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.321 [2024-04-17 10:29:55.505296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.321 [2024-04-17 10:29:55.505306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.505315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.505334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.515176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.515263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.515284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.515295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.515303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.515322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 
00:33:22.322 [2024-04-17 10:29:55.525347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.525435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.525456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.525466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.525474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.525493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.535306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.535391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.535412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.535421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.535430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.535450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.545398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.545490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.545511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.545521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.545531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.545553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 
00:33:22.322 [2024-04-17 10:29:55.555426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.555515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.555536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.555546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.555554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.555574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.565327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.565418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.565439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.565449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.565458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.565476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.575374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.575496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.575517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.575527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.575536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.575555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 
00:33:22.322 [2024-04-17 10:29:55.585461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.585549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.585569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.585579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.585588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.585607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.595493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.595621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.595651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.595663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.595672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.595691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 00:33:22.322 [2024-04-17 10:29:55.605529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.605619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.605641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.605657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.322 [2024-04-17 10:29:55.605666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.322 [2024-04-17 10:29:55.605685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.322 qpair failed and we were unable to recover it. 
00:33:22.322 [2024-04-17 10:29:55.615530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.322 [2024-04-17 10:29:55.615660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.322 [2024-04-17 10:29:55.615681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.322 [2024-04-17 10:29:55.615691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.323 [2024-04-17 10:29:55.615700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.323 [2024-04-17 10:29:55.615719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.323 qpair failed and we were unable to recover it. 00:33:22.323 [2024-04-17 10:29:55.625652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.323 [2024-04-17 10:29:55.625744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.323 [2024-04-17 10:29:55.625765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.323 [2024-04-17 10:29:55.625775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.323 [2024-04-17 10:29:55.625784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.323 [2024-04-17 10:29:55.625802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.323 qpair failed and we were unable to recover it. 00:33:22.323 [2024-04-17 10:29:55.635628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.323 [2024-04-17 10:29:55.635756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.323 [2024-04-17 10:29:55.635777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.323 [2024-04-17 10:29:55.635786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.323 [2024-04-17 10:29:55.635795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.323 [2024-04-17 10:29:55.635820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.323 qpair failed and we were unable to recover it. 
00:33:22.323 [2024-04-17 10:29:55.645627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.323 [2024-04-17 10:29:55.645756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.323 [2024-04-17 10:29:55.645777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.323 [2024-04-17 10:29:55.645787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.323 [2024-04-17 10:29:55.645798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.323 [2024-04-17 10:29:55.645818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.323 qpair failed and we were unable to recover it. 00:33:22.583 [2024-04-17 10:29:55.655673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.583 [2024-04-17 10:29:55.655792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.583 [2024-04-17 10:29:55.655812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.583 [2024-04-17 10:29:55.655822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.583 [2024-04-17 10:29:55.655831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.583 [2024-04-17 10:29:55.655852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.583 qpair failed and we were unable to recover it. 00:33:22.583 [2024-04-17 10:29:55.665735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:22.583 [2024-04-17 10:29:55.665872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:22.583 [2024-04-17 10:29:55.665892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:22.583 [2024-04-17 10:29:55.665902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:22.583 [2024-04-17 10:29:55.665911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:22.583 [2024-04-17 10:29:55.665931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:22.583 qpair failed and we were unable to recover it. 
00:33:22.583 [2024-04-17 10:29:55.675660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:22.583 [2024-04-17 10:29:55.675757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:22.583 [2024-04-17 10:29:55.675777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:22.583 [2024-04-17 10:29:55.675787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:22.583 [2024-04-17 10:29:55.675796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60
00:33:22.583 [2024-04-17 10:29:55.675816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:22.583 qpair failed and we were unable to recover it.
00:33:22.583–00:33:23.106 [2024-04-17 10:29:55.685 – 10:29:56.358] The same error cluster (Unknown controller ID 0x1; Connect command failed rc -5; sct 1, sc 130; failed NVMe-oF Fabric CONNECT poll; Failed to connect tqpair=0x8a2b60; CQ transport error -6 on qpair id 3; "qpair failed and we were unable to recover it") repeats at roughly 10 ms intervals; the 68 further occurrences in this window are condensed here.
00:33:23.106 [2024-04-17 10:29:56.367816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.367912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.367934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.367944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.367953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.367972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 00:33:23.106 [2024-04-17 10:29:56.377878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.377963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.377984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.377994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.378003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.378022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 00:33:23.106 [2024-04-17 10:29:56.387907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.387995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.388015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.388026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.388035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.388053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 
00:33:23.106 [2024-04-17 10:29:56.397928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.398017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.398040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.398050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.398060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.398080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 00:33:23.106 [2024-04-17 10:29:56.407947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.408036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.408056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.408066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.408076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.408094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 00:33:23.106 [2024-04-17 10:29:56.418007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.418093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.418114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.418124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.418133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.418152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 
00:33:23.106 [2024-04-17 10:29:56.428037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.106 [2024-04-17 10:29:56.428125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.106 [2024-04-17 10:29:56.428145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.106 [2024-04-17 10:29:56.428156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.106 [2024-04-17 10:29:56.428165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.106 [2024-04-17 10:29:56.428184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.106 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.438040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.438126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.438148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.438158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.438167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.438185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.448113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.448210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.448231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.448241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.448253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.448272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.458118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.458223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.458243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.458253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.458262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.458280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.468086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.468174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.468195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.468205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.468214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.468233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.478184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.478273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.478294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.478304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.478313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.478332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.488211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.488312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.488332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.488342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.488351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.488370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.498260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.498350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.498371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.498381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.498389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.498408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.508291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.508432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.508453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.508463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.508472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.508490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.518314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.518408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.518429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.518439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.518448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.518467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.528372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.528459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.528480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.528490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.528499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.528517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.538393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.538483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.538504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.538518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.538527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.538546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.548407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.548530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.548551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.548561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.548569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.548588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.558403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.558494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.558515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.558525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.558534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.558553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.568479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.568609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.568630] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.568640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.568662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.568682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.578542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.578634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.578661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.578671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.578680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.578699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.588513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.588601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.588622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.588632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.588641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.588664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.598572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.598685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.598706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.598716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.598725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.598744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.608591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.608682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.608702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.608712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.608721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.608739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.618544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.618628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.618653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.618664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.618673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.618692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.628672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.628771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.628793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.628807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.628816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.628836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 
00:33:23.366 [2024-04-17 10:29:56.638686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.638781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.638802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.638812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.638822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.638841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.648632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.648755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.648776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.366 [2024-04-17 10:29:56.648786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.366 [2024-04-17 10:29:56.648796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.366 [2024-04-17 10:29:56.648814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.366 qpair failed and we were unable to recover it. 00:33:23.366 [2024-04-17 10:29:56.658726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.366 [2024-04-17 10:29:56.658829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.366 [2024-04-17 10:29:56.658850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.367 [2024-04-17 10:29:56.658860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.367 [2024-04-17 10:29:56.658869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.367 [2024-04-17 10:29:56.658888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.367 qpair failed and we were unable to recover it. 
00:33:23.367 [2024-04-17 10:29:56.668805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.367 [2024-04-17 10:29:56.668896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.367 [2024-04-17 10:29:56.668918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.367 [2024-04-17 10:29:56.668929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.367 [2024-04-17 10:29:56.668939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.367 [2024-04-17 10:29:56.668959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-04-17 10:29:56.678781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.367 [2024-04-17 10:29:56.678873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.367 [2024-04-17 10:29:56.678894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.367 [2024-04-17 10:29:56.678903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.367 [2024-04-17 10:29:56.678912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.367 [2024-04-17 10:29:56.678931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.367 qpair failed and we were unable to recover it. 00:33:23.367 [2024-04-17 10:29:56.688842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.367 [2024-04-17 10:29:56.688923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.367 [2024-04-17 10:29:56.688944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.367 [2024-04-17 10:29:56.688954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.367 [2024-04-17 10:29:56.688963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.367 [2024-04-17 10:29:56.688981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.367 qpair failed and we were unable to recover it. 
00:33:23.627 [2024-04-17 10:29:56.698798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.627 [2024-04-17 10:29:56.698885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.627 [2024-04-17 10:29:56.698906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.627 [2024-04-17 10:29:56.698916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.627 [2024-04-17 10:29:56.698924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.627 [2024-04-17 10:29:56.698943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.627 qpair failed and we were unable to recover it. 00:33:23.627 [2024-04-17 10:29:56.708895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.627 [2024-04-17 10:29:56.708984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.627 [2024-04-17 10:29:56.709004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.627 [2024-04-17 10:29:56.709014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.627 [2024-04-17 10:29:56.709022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.627 [2024-04-17 10:29:56.709041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.627 qpair failed and we were unable to recover it. 00:33:23.627 [2024-04-17 10:29:56.718941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.627 [2024-04-17 10:29:56.719034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.627 [2024-04-17 10:29:56.719054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.627 [2024-04-17 10:29:56.719069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.627 [2024-04-17 10:29:56.719077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.627 [2024-04-17 10:29:56.719096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.627 qpair failed and we were unable to recover it. 
00:33:23.627 [2024-04-17 10:29:56.728955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.627 [2024-04-17 10:29:56.729040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.627 [2024-04-17 10:29:56.729060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.627 [2024-04-17 10:29:56.729070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.627 [2024-04-17 10:29:56.729079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.627 [2024-04-17 10:29:56.729097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.627 qpair failed and we were unable to recover it. 00:33:23.627 [2024-04-17 10:29:56.738998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.627 [2024-04-17 10:29:56.739087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.627 [2024-04-17 10:29:56.739108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.739118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.739126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.739145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.748985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.749113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.749133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.749143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.749152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.749171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 
00:33:23.628 [2024-04-17 10:29:56.759040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.759127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.759148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.759158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.759166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.759185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.769067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.769154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.769175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.769185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.769193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.769212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.779109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.779230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.779251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.779261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.779270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.779289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 
00:33:23.628 [2024-04-17 10:29:56.789161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.789252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.789272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.789282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.789291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.789309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.799175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.799259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.799280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.799291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.799300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.799319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.809203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.809288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.809309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.809323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.809332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.809350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 
00:33:23.628 [2024-04-17 10:29:56.819228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.819318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.819339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.819348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.819357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.819376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.829275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.829358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.829380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.829390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.829399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.829418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.839343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.839438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.839460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.839470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.839479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.839498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 
00:33:23.628 [2024-04-17 10:29:56.849264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.849362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.849383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.849393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.849401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.849420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.859370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.859469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.859492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.859503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.859512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.859531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.628 qpair failed and we were unable to recover it. 00:33:23.628 [2024-04-17 10:29:56.869421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.628 [2024-04-17 10:29:56.869508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.628 [2024-04-17 10:29:56.869531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.628 [2024-04-17 10:29:56.869541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.628 [2024-04-17 10:29:56.869551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.628 [2024-04-17 10:29:56.869571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 
00:33:23.629 [2024-04-17 10:29:56.879421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.879509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.879532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.879544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.879554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.879573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.629 [2024-04-17 10:29:56.889464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.889547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.889567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.889577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.889586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.889605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.629 [2024-04-17 10:29:56.899473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.899571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.899592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.899606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.899615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.899634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 
00:33:23.629 [2024-04-17 10:29:56.909531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.909630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.909656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.909666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.909676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.909695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.629 [2024-04-17 10:29:56.919546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.919648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.919670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.919680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.919689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.919708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.629 [2024-04-17 10:29:56.929569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.929689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.929711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.929722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.929731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.929750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 
00:33:23.629 [2024-04-17 10:29:56.939603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.939715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.939737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.939748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.939757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.939776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.629 [2024-04-17 10:29:56.949710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.629 [2024-04-17 10:29:56.949828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.629 [2024-04-17 10:29:56.949851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.629 [2024-04-17 10:29:56.949861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.629 [2024-04-17 10:29:56.949870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.629 [2024-04-17 10:29:56.949891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.629 qpair failed and we were unable to recover it. 00:33:23.890 [2024-04-17 10:29:56.959710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:56.959849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:56.959872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:56.959883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:56.959893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:56.959912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 
00:33:23.890 [2024-04-17 10:29:56.969628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:56.969726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:56.969748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:56.969758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:56.969766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:56.969786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 00:33:23.890 [2024-04-17 10:29:56.979702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:56.979790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:56.979811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:56.979821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:56.979829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:56.979848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 00:33:23.890 [2024-04-17 10:29:56.989857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:56.989951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:56.989976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:56.989986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:56.989995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:56.990014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 
00:33:23.890 [2024-04-17 10:29:56.999726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:56.999852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:56.999873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:56.999883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:56.999891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:56.999912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 00:33:23.890 [2024-04-17 10:29:57.009813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:57.009899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:57.009919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:57.009930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:57.009939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:57.009957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 00:33:23.890 [2024-04-17 10:29:57.019897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:57.019998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:57.020019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:57.020029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:57.020039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:57.020057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 
00:33:23.890 [2024-04-17 10:29:57.029840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.890 [2024-04-17 10:29:57.029925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.890 [2024-04-17 10:29:57.029946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.890 [2024-04-17 10:29:57.029956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.890 [2024-04-17 10:29:57.029965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.890 [2024-04-17 10:29:57.029983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.890 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.039846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.039964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.039986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.039996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.040005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.040025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.049915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.050002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.050023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.050032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.050041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.050059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 
00:33:23.891 [2024-04-17 10:29:57.059930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.060047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.060068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.060079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.060088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.060107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.069968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.070057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.070078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.070088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.070097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.070116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.080018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.080107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.080131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.080142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.080151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.080169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 
00:33:23.891 [2024-04-17 10:29:57.090032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.090117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.090138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.090149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.090158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.090176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.100005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.100094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.100115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.100126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.100135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.100153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.110065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.110156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.110176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.110186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.110195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.110213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 
00:33:23.891 [2024-04-17 10:29:57.120137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.120233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.120253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.120263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.120272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.120291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.130164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.130254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.130275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.130285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.130294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.130312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.140204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.140298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.140319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.140328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.140338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.140357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 
00:33:23.891 [2024-04-17 10:29:57.150163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.150282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.150302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.150312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.150321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.150340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.160266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.160364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.891 [2024-04-17 10:29:57.160386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.891 [2024-04-17 10:29:57.160396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.891 [2024-04-17 10:29:57.160404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.891 [2024-04-17 10:29:57.160423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.891 qpair failed and we were unable to recover it. 00:33:23.891 [2024-04-17 10:29:57.170247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.891 [2024-04-17 10:29:57.170338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.170362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.170372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.170382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.170401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 
00:33:23.892 [2024-04-17 10:29:57.180331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.892 [2024-04-17 10:29:57.180424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.180445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.180455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.180464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.180482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 00:33:23.892 [2024-04-17 10:29:57.190394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.892 [2024-04-17 10:29:57.190524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.190545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.190555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.190563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.190582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 00:33:23.892 [2024-04-17 10:29:57.200394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.892 [2024-04-17 10:29:57.200530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.200551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.200561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.200571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.200590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 
00:33:23.892 [2024-04-17 10:29:57.210345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.892 [2024-04-17 10:29:57.210436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.210457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.210467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.210477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.210499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 00:33:23.892 [2024-04-17 10:29:57.220435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:23.892 [2024-04-17 10:29:57.220521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:23.892 [2024-04-17 10:29:57.220542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:23.892 [2024-04-17 10:29:57.220552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:23.892 [2024-04-17 10:29:57.220561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:23.892 [2024-04-17 10:29:57.220580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:23.892 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.230403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.230497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.230518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.230528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.230537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.230555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 
00:33:24.152 [2024-04-17 10:29:57.240506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.240630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.240656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.240666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.240675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.240694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.250577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.250680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.250702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.250712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.250721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.250740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.260560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.260659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.260687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.260698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.260707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.260726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 
00:33:24.152 [2024-04-17 10:29:57.270606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.270708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.270729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.270739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.270748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.270766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.280627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.280725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.280745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.280756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.280765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.280784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.290586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.290683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.290704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.290714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.290723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.290742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 
00:33:24.152 [2024-04-17 10:29:57.300698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.300794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.300816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.300826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.300835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.300858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.310785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.310877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.152 [2024-04-17 10:29:57.310897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.152 [2024-04-17 10:29:57.310907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.152 [2024-04-17 10:29:57.310916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.152 [2024-04-17 10:29:57.310935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.152 qpair failed and we were unable to recover it. 00:33:24.152 [2024-04-17 10:29:57.320763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.152 [2024-04-17 10:29:57.320855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.320875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.320885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.320894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.320912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 
00:33:24.153 [2024-04-17 10:29:57.330806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.330901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.330921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.330931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.330939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.330958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.340794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.340894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.340915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.340925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.340933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.340952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.350863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.350954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.350982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.350992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.351001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.351021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 
00:33:24.153 [2024-04-17 10:29:57.360815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.360903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.360924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.360935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.360944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.360962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.370946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.371036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.371057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.371067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.371076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.371095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.380956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.381045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.381067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.381076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.381085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.381105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 
00:33:24.153 [2024-04-17 10:29:57.390919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.391012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.391033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.391043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.391052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.391074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.401000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.401094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.401115] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.401125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.401134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.401153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.410968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.411059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.411080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.411090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.411099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.411118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 
00:33:24.153 [2024-04-17 10:29:57.421048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.421135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.421155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.421165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.421174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.421193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.431062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.431152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.431172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.431182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.431191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.431210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 00:33:24.153 [2024-04-17 10:29:57.441052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.441134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.441158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.441168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.441177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.153 [2024-04-17 10:29:57.441195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.153 qpair failed and we were unable to recover it. 
00:33:24.153 [2024-04-17 10:29:57.451102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.153 [2024-04-17 10:29:57.451197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.153 [2024-04-17 10:29:57.451217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.153 [2024-04-17 10:29:57.451227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.153 [2024-04-17 10:29:57.451236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.154 [2024-04-17 10:29:57.451255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.154 qpair failed and we were unable to recover it. 00:33:24.154 [2024-04-17 10:29:57.461203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.154 [2024-04-17 10:29:57.461303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.154 [2024-04-17 10:29:57.461324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.154 [2024-04-17 10:29:57.461334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.154 [2024-04-17 10:29:57.461343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.154 [2024-04-17 10:29:57.461362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.154 qpair failed and we were unable to recover it. 00:33:24.154 [2024-04-17 10:29:57.471191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.154 [2024-04-17 10:29:57.471279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.154 [2024-04-17 10:29:57.471299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.154 [2024-04-17 10:29:57.471309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.154 [2024-04-17 10:29:57.471318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.154 [2024-04-17 10:29:57.471337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.154 qpair failed and we were unable to recover it. 
00:33:24.154 [2024-04-17 10:29:57.481212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.154 [2024-04-17 10:29:57.481294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.154 [2024-04-17 10:29:57.481315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.154 [2024-04-17 10:29:57.481325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.154 [2024-04-17 10:29:57.481333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.154 [2024-04-17 10:29:57.481356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.154 qpair failed and we were unable to recover it. 00:33:24.413 [2024-04-17 10:29:57.491249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.413 [2024-04-17 10:29:57.491359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.413 [2024-04-17 10:29:57.491380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.413 [2024-04-17 10:29:57.491390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.413 [2024-04-17 10:29:57.491399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.413 [2024-04-17 10:29:57.491417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.413 qpair failed and we were unable to recover it. 00:33:24.413 [2024-04-17 10:29:57.501325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.413 [2024-04-17 10:29:57.501452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.413 [2024-04-17 10:29:57.501472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.413 [2024-04-17 10:29:57.501482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.413 [2024-04-17 10:29:57.501491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.413 [2024-04-17 10:29:57.501510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.413 qpair failed and we were unable to recover it. 
00:33:24.413 [2024-04-17 10:29:57.511417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.413 [2024-04-17 10:29:57.511510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.413 [2024-04-17 10:29:57.511530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.413 [2024-04-17 10:29:57.511541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.413 [2024-04-17 10:29:57.511549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2b60 00:33:24.413 [2024-04-17 10:29:57.511568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:24.413 qpair failed and we were unable to recover it. 00:33:24.413 [2024-04-17 10:29:57.521345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.413 [2024-04-17 10:29:57.521424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.413 [2024-04-17 10:29:57.521444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.413 [2024-04-17 10:29:57.521452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.413 [2024-04-17 10:29:57.521458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:24.413 [2024-04-17 10:29:57.521474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:24.413 qpair failed and we were unable to recover it. 00:33:24.413 [2024-04-17 10:29:57.531358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.413 [2024-04-17 10:29:57.531441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.413 [2024-04-17 10:29:57.531460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.413 [2024-04-17 10:29:57.531466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.414 [2024-04-17 10:29:57.531472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2080000b90 00:33:24.414 [2024-04-17 10:29:57.531486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:24.414 qpair failed and we were unable to recover it. 
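
Each of the failures logged above follows the same sequence: the target's ctrlr.c rejects an I/O-queue CONNECT because controller ID 0x1 no longer maps to a live admin connection, the initiator sees the CONNECT complete with sct 1, sc 130 (0x82, the NVMe-oF Fabrics "Connect Invalid Parameters" status), and the qpair is torn down with CQ transport error -6. That is the fault the target_disconnect test is deliberately injecting. For orientation only, a minimal bash sketch of the kind of target-side configuration these CONNECTs are aimed at, using SPDK's rpc.py; the bdev name, serial number and sizes are illustrative assumptions, while the NQN, address and port are taken from this log:

# Hypothetical target-side setup matching the listener the test connects to.
# Assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH.
rpc.py nvmf_create_transport -t TCP                       # enable the TCP transport
rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM bdev, 512 B blocks (assumed)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the controller behind the admin queue is gone on the target, any I/O-queue CONNECT that still carries its controller ID is answered exactly as shown above.
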
00:33:24.414 [2024-04-17 10:29:57.531771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0650 is same with the state(5) to be set 00:33:24.414 [2024-04-17 10:29:57.541455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.414 [2024-04-17 10:29:57.541561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.414 [2024-04-17 10:29:57.541594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.414 [2024-04-17 10:29:57.541608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.414 [2024-04-17 10:29:57.541620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2088000b90 00:33:24.414 [2024-04-17 10:29:57.541655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:24.414 qpair failed and we were unable to recover it. 00:33:24.414 [2024-04-17 10:29:57.551468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.414 [2024-04-17 10:29:57.551589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.414 [2024-04-17 10:29:57.551612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.414 [2024-04-17 10:29:57.551623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.414 [2024-04-17 10:29:57.551632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2088000b90 00:33:24.414 [2024-04-17 10:29:57.551661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:24.414 qpair failed and we were unable to recover it. 00:33:24.414 [2024-04-17 10:29:57.561517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.414 [2024-04-17 10:29:57.561693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.414 [2024-04-17 10:29:57.561748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.414 [2024-04-17 10:29:57.561775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.414 [2024-04-17 10:29:57.561794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2078000b90 00:33:24.414 [2024-04-17 10:29:57.561842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.414 qpair failed and we were unable to recover it. 
00:33:24.414 [2024-04-17 10:29:57.571558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:24.414 [2024-04-17 10:29:57.571679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:24.414 [2024-04-17 10:29:57.571710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:24.414 [2024-04-17 10:29:57.571730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:24.414 [2024-04-17 10:29:57.571743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2078000b90 00:33:24.414 [2024-04-17 10:29:57.571774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:24.414 qpair failed and we were unable to recover it. 00:33:24.414 [2024-04-17 10:29:57.572078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b0650 (9): Bad file descriptor 00:33:24.414 Initializing NVMe Controllers 00:33:24.414 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:24.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:24.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:24.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:24.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:24.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:24.414 Initialization complete. Launching workers. 
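
The "Initializing NVMe Controllers ... Launching workers" block is the host-side application attaching to the target at 10.0.0.2:4420 and spreading its queue pairs across lcores 0-3 just before the workers start issuing I/O. As a rough, hedged comparison only (not the command the test itself runs), the same listener could be attached from an initiator in either of the following ways; every flag value below is an assumption derived from the address and NQN in this log:

# Kernel initiator via nvme-cli:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# SPDK userspace initiator, exposing the remote namespace as a local bdev:
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1

In an undisturbed run every I/O queue pair finishes its Fabrics CONNECT before the worker threads start; here the target is being reset on purpose, which is why the attach output is interleaved with the qpair failures above.
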
00:33:24.414 Starting thread on core 1 00:33:24.414 Starting thread on core 2 00:33:24.414 Starting thread on core 3 00:33:24.414 Starting thread on core 0 00:33:24.414 10:29:57 -- host/target_disconnect.sh@59 -- # sync 00:33:24.414 00:33:24.414 real 0m11.500s 00:33:24.414 user 0m21.317s 00:33:24.414 sys 0m4.168s 00:33:24.414 10:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:24.414 10:29:57 -- common/autotest_common.sh@10 -- # set +x 00:33:24.414 ************************************ 00:33:24.414 END TEST nvmf_target_disconnect_tc2 00:33:24.414 ************************************ 00:33:24.414 10:29:57 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:33:24.414 10:29:57 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:24.414 10:29:57 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:33:24.414 10:29:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:24.414 10:29:57 -- nvmf/common.sh@116 -- # sync 00:33:24.414 10:29:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:24.414 10:29:57 -- nvmf/common.sh@119 -- # set +e 00:33:24.414 10:29:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:24.414 10:29:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:24.414 rmmod nvme_tcp 00:33:24.414 rmmod nvme_fabrics 00:33:24.414 rmmod nvme_keyring 00:33:24.414 10:29:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:24.414 10:29:57 -- nvmf/common.sh@123 -- # set -e 00:33:24.414 10:29:57 -- nvmf/common.sh@124 -- # return 0 00:33:24.414 10:29:57 -- nvmf/common.sh@477 -- # '[' -n 3656818 ']' 00:33:24.414 10:29:57 -- nvmf/common.sh@478 -- # killprocess 3656818 00:33:24.414 10:29:57 -- common/autotest_common.sh@926 -- # '[' -z 3656818 ']' 00:33:24.414 10:29:57 -- common/autotest_common.sh@930 -- # kill -0 3656818 00:33:24.414 10:29:57 -- common/autotest_common.sh@931 -- # uname 00:33:24.414 10:29:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:24.414 10:29:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3656818 00:33:24.414 10:29:57 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:33:24.414 10:29:57 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:33:24.414 10:29:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3656818' 00:33:24.414 killing process with pid 3656818 00:33:24.414 10:29:57 -- common/autotest_common.sh@945 -- # kill 3656818 00:33:24.414 10:29:57 -- common/autotest_common.sh@950 -- # wait 3656818 00:33:24.673 10:29:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:24.673 10:29:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:24.673 10:29:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:24.673 10:29:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.673 10:29:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:24.673 10:29:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.673 10:29:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.673 10:29:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.211 10:30:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:27.211 00:33:27.211 real 0m19.942s 00:33:27.211 user 0m49.134s 00:33:27.211 sys 0m8.794s 00:33:27.211 10:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.211 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.211 ************************************ 00:33:27.211 END TEST nvmf_target_disconnect 00:33:27.211 
************************************ 00:33:27.211 10:30:00 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:27.211 10:30:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:27.211 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.211 10:30:00 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:27.211 00:33:27.211 real 23m51.163s 00:33:27.211 user 65m0.627s 00:33:27.211 sys 6m4.759s 00:33:27.211 10:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.211 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.211 ************************************ 00:33:27.211 END TEST nvmf_tcp 00:33:27.211 ************************************ 00:33:27.211 10:30:00 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:27.211 10:30:00 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:27.211 10:30:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:27.211 10:30:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:27.211 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.211 ************************************ 00:33:27.211 START TEST spdkcli_nvmf_tcp 00:33:27.212 ************************************ 00:33:27.212 10:30:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:27.212 * Looking for test storage... 00:33:27.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:27.212 10:30:00 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:27.212 10:30:00 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.212 10:30:00 -- nvmf/common.sh@7 -- # uname -s 00:33:27.212 10:30:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.212 10:30:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.212 10:30:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.212 10:30:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.212 10:30:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.212 10:30:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.212 10:30:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.212 10:30:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.212 10:30:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.212 10:30:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.212 10:30:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:27.212 10:30:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:27.212 10:30:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.212 10:30:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.212 10:30:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.212 10:30:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.212 10:30:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
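
The section that begins here is the spdkcli_nvmf_tcp sub-test: nvmf.sh starts an nvmf_tgt instance and then drives spdkcli to build and tear down an NVMe-oF configuration, comparing the resulting tree against a match file. A short sketch of how the same flow can be reproduced by hand from an SPDK checkout; the invocation mirrors the run_test call in this log and the spdkcli.py usage appears verbatim later in the trace, but running it outside the CI harness is an assumption:

# Re-run just this sub-test from the repository root (assumed local checkout):
sudo ./test/spdkcli/nvmf.sh --transport=tcp

# With nvmf_tgt already running, the configuration tree the test checks can be listed with:
scripts/spdkcli.py ll /nvmf

The create and delete commands themselves (for example '/bdevs/malloc create 32 512 Malloc1' and '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 ...') are fed through spdkcli_job.py, as the command lists executed further down show.
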
00:33:27.212 10:30:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.212 10:30:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.212 10:30:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.212 10:30:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.212 10:30:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.212 10:30:00 -- paths/export.sh@5 -- # export PATH 00:33:27.212 10:30:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.212 10:30:00 -- nvmf/common.sh@46 -- # : 0 00:33:27.212 10:30:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:27.212 10:30:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:27.212 10:30:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:27.212 10:30:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.212 10:30:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.212 10:30:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:27.212 10:30:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:27.212 10:30:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:27.212 10:30:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:27.212 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.212 10:30:00 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:27.212 10:30:00 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3658595 00:33:27.212 10:30:00 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:27.212 10:30:00 -- spdkcli/common.sh@34 -- # waitforlisten 3658595 00:33:27.212 10:30:00 -- common/autotest_common.sh@819 -- # '[' -z 3658595 ']' 00:33:27.212 10:30:00 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:33:27.212 10:30:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:27.212 10:30:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.212 10:30:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:27.212 10:30:00 -- common/autotest_common.sh@10 -- # set +x 00:33:27.212 [2024-04-17 10:30:00.326155] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:33:27.212 [2024-04-17 10:30:00.326217] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658595 ] 00:33:27.212 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.212 [2024-04-17 10:30:00.407714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.212 [2024-04-17 10:30:00.499076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:27.212 [2024-04-17 10:30:00.499244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.212 [2024-04-17 10:30:00.499250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.148 10:30:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:28.148 10:30:01 -- common/autotest_common.sh@852 -- # return 0 00:33:28.148 10:30:01 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:28.148 10:30:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:28.148 10:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.148 10:30:01 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:28.148 10:30:01 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:28.148 10:30:01 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:28.148 10:30:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:28.148 10:30:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.148 10:30:01 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:28.148 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:28.148 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:28.148 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:28.148 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:28.148 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:28.148 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:28.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:28.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:28.148 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:28.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:28.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:28.148 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:28.148 ' 00:33:28.407 [2024-04-17 10:30:01.699481] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:30.939 [2024-04-17 10:30:03.943860] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.315 [2024-04-17 10:30:05.228578] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:34.847 [2024-04-17 10:30:07.612626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:36.753 [2024-04-17 10:30:09.667778] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:38.188 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:38.188 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:38.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.188 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:38.188 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:38.188 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:38.188 10:30:11 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:38.188 10:30:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:38.188 10:30:11 -- common/autotest_common.sh@10 -- # set +x 00:33:38.188 10:30:11 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:38.188 10:30:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:38.188 10:30:11 -- common/autotest_common.sh@10 -- # set +x 00:33:38.188 10:30:11 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:38.188 10:30:11 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:38.447 10:30:11 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:38.706 10:30:11 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:38.706 10:30:11 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:38.706 10:30:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:38.706 10:30:11 -- common/autotest_common.sh@10 -- # set +x 00:33:38.706 10:30:11 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:38.706 10:30:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:38.706 10:30:11 -- common/autotest_common.sh@10 -- # set +x 00:33:38.706 10:30:11 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:38.706 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:38.706 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:38.706 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:38.706 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:38.706 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:38.706 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:38.706 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:38.706 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:38.706 ' 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:43.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:43.978 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:43.978 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:43.978 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:43.978 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:43.978 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:43.978 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:43.978 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:43.978 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:43.978 10:30:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:43.978 10:30:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:43.978 10:30:16 -- common/autotest_common.sh@10 -- # set +x 00:33:43.978 10:30:16 -- spdkcli/nvmf.sh@90 -- # killprocess 3658595 00:33:43.978 10:30:16 -- common/autotest_common.sh@926 -- # '[' -z 3658595 ']' 00:33:43.978 10:30:16 -- common/autotest_common.sh@930 -- # kill -0 3658595 00:33:43.978 10:30:16 -- common/autotest_common.sh@931 -- # uname 00:33:43.978 10:30:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:43.978 10:30:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3658595 00:33:43.978 10:30:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:43.978 10:30:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:43.978 10:30:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3658595' 00:33:43.978 killing process with pid 3658595 00:33:43.978 10:30:16 -- common/autotest_common.sh@945 -- # kill 3658595 00:33:43.978 [2024-04-17 10:30:16.924543] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:43.978 10:30:16 -- common/autotest_common.sh@950 -- # wait 3658595 00:33:43.978 10:30:17 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:43.978 10:30:17 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:43.978 10:30:17 -- spdkcli/common.sh@13 -- # '[' -n 3658595 ']' 00:33:43.978 10:30:17 -- spdkcli/common.sh@14 -- # killprocess 3658595 00:33:43.978 10:30:17 -- common/autotest_common.sh@926 -- # '[' -z 3658595 ']' 00:33:43.978 10:30:17 -- common/autotest_common.sh@930 -- # kill -0 3658595 00:33:43.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3658595) - No such process 00:33:43.978 10:30:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3658595 is not found' 00:33:43.978 Process with pid 3658595 is not found 00:33:43.978 10:30:17 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:43.978 10:30:17 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:43.978 10:30:17 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:43.978 00:33:43.978 real 0m16.980s 00:33:43.978 user 0m36.363s 00:33:43.978 sys 0m0.846s 00:33:43.978 10:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.978 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:33:43.978 ************************************ 00:33:43.978 END TEST spdkcli_nvmf_tcp 00:33:43.978 ************************************ 00:33:43.978 10:30:17 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:43.978 10:30:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:43.978 10:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.978 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:33:43.978 ************************************ 00:33:43.978 START TEST 
nvmf_identify_passthru 00:33:43.978 ************************************ 00:33:43.978 10:30:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:43.978 * Looking for test storage... 00:33:43.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:43.978 10:30:17 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.978 10:30:17 -- nvmf/common.sh@7 -- # uname -s 00:33:43.978 10:30:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.978 10:30:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.978 10:30:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.978 10:30:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.978 10:30:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.978 10:30:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.978 10:30:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.978 10:30:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.978 10:30:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.978 10:30:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.978 10:30:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:43.978 10:30:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:43.978 10:30:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.978 10:30:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.978 10:30:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.978 10:30:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.978 10:30:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.978 10:30:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.978 10:30:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.978 10:30:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@5 -- # export PATH 00:33:43.978 
10:30:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- nvmf/common.sh@46 -- # : 0 00:33:43.978 10:30:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:43.978 10:30:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:43.978 10:30:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:43.978 10:30:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.978 10:30:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.978 10:30:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:43.978 10:30:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:43.978 10:30:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:43.978 10:30:17 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.978 10:30:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.978 10:30:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.978 10:30:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.978 10:30:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- paths/export.sh@5 -- # export PATH 00:33:43.978 10:30:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.978 10:30:17 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:43.979 10:30:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:43.979 10:30:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.979 10:30:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:43.979 10:30:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:43.979 10:30:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:43.979 10:30:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.979 10:30:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:43.979 10:30:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.979 10:30:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:43.979 10:30:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:43.979 10:30:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:43.979 10:30:17 -- common/autotest_common.sh@10 -- # set +x 00:33:50.545 10:30:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:50.545 10:30:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:50.545 10:30:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:50.545 10:30:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:50.545 10:30:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:50.545 10:30:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:50.545 10:30:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:50.545 10:30:22 -- nvmf/common.sh@294 -- # net_devs=() 00:33:50.545 10:30:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:50.545 10:30:22 -- nvmf/common.sh@295 -- # e810=() 00:33:50.545 10:30:22 -- nvmf/common.sh@295 -- # local -ga e810 00:33:50.545 10:30:22 -- nvmf/common.sh@296 -- # x722=() 00:33:50.545 10:30:22 -- nvmf/common.sh@296 -- # local -ga x722 00:33:50.545 10:30:22 -- nvmf/common.sh@297 -- # mlx=() 00:33:50.545 10:30:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:50.545 10:30:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.545 10:30:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.546 10:30:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:50.546 10:30:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:50.546 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:50.546 10:30:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:50.546 10:30:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:50.546 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:50.546 10:30:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:50.546 10:30:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.546 10:30:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.546 10:30:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:50.546 Found net devices under 0000:af:00.0: cvl_0_0 00:33:50.546 10:30:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:50.546 10:30:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.546 10:30:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.546 10:30:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:50.546 Found net devices under 0000:af:00.1: cvl_0_1 00:33:50.546 10:30:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:50.546 10:30:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:50.546 10:30:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.546 10:30:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.546 10:30:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:50.546 10:30:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.546 10:30:22 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.546 10:30:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:50.546 10:30:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.546 10:30:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.546 10:30:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:50.546 10:30:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:50.546 10:30:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.546 10:30:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.546 10:30:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.546 10:30:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.546 10:30:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:50.546 10:30:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.546 10:30:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.546 10:30:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.546 10:30:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:50.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:33:50.546 00:33:50.546 --- 10.0.0.2 ping statistics --- 00:33:50.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.546 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:50.546 10:30:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:50.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:33:50.546 00:33:50.546 --- 10.0.0.1 ping statistics --- 00:33:50.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.546 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:33:50.546 10:30:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.546 10:30:22 -- nvmf/common.sh@410 -- # return 0 00:33:50.546 10:30:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:50.546 10:30:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.546 10:30:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:50.546 10:30:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.546 10:30:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:50.546 10:30:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:50.546 10:30:22 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:50.546 10:30:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:50.546 10:30:22 -- common/autotest_common.sh@10 -- # set +x 00:33:50.546 10:30:22 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:50.546 10:30:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:50.546 10:30:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:50.546 10:30:22 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:50.546 10:30:22 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:50.546 10:30:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:50.546 10:30:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:50.546 10:30:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:50.546 10:30:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:50.546 10:30:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:50.546 10:30:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:50.546 10:30:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:86:00.0 00:33:50.546 10:30:23 -- common/autotest_common.sh@1512 -- # echo 0000:86:00.0 00:33:50.546 10:30:23 -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:33:50.546 10:30:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:33:50.546 10:30:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:50.546 10:30:23 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:33:50.546 10:30:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:50.546 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.737 10:30:27 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:33:54.738 10:30:27 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:33:54.738 10:30:27 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:54.738 10:30:27 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:54.738 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.928 10:30:31 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:58.928 10:30:31 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:58.928 10:30:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:58.928 10:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.928 10:30:31 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:58.928 10:30:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:58.928 10:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.928 10:30:31 -- target/identify_passthru.sh@31 -- # nvmfpid=3666715 00:33:58.928 10:30:31 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:58.928 10:30:31 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.928 10:30:31 -- target/identify_passthru.sh@35 -- # waitforlisten 3666715 00:33:58.928 10:30:31 -- common/autotest_common.sh@819 -- # '[' -z 3666715 ']' 00:33:58.928 10:30:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.928 10:30:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:58.928 10:30:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.928 10:30:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:58.928 10:30:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.928 [2024-04-17 10:30:31.608336] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
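Before the NVMe-oF target comes up, the test reads the controller's serial and model number directly over PCIe so it can later compare them against what the passthru subsystem reports over TCP. A minimal sketch of that extraction step, assuming the same spdk_nvme_identify binary and the BDF 0000:86:00.0 that gen_nvme.sh picked above (the binary path and BDF will differ on other machines):

  bdf=0000:86:00.0
  identify=./build/bin/spdk_nvme_identify
  # read serial and model of the local controller over PCIe, as traced above
  nvme_serial_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  nvme_model_number=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$nvme_serial_number $nvme_model_number"   # e.g. BTLJ916308MR1P0FGN INTEL

The same grep/awk pattern is reused later against the TCP path, which is how the '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' comparison further down gets its operands.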
00:33:58.929 [2024-04-17 10:30:31.608391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.929 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.929 [2024-04-17 10:30:31.691576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:58.929 [2024-04-17 10:30:31.779982] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:58.929 [2024-04-17 10:30:31.780122] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.929 [2024-04-17 10:30:31.780134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.929 [2024-04-17 10:30:31.780143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:58.929 [2024-04-17 10:30:31.780199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.929 [2024-04-17 10:30:31.780289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.929 [2024-04-17 10:30:31.780412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.929 [2024-04-17 10:30:31.780412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.496 10:30:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:59.496 10:30:32 -- common/autotest_common.sh@852 -- # return 0 00:33:59.496 10:30:32 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:59.496 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.496 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.496 INFO: Log level set to 20 00:33:59.496 INFO: Requests: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "method": "nvmf_set_config", 00:33:59.496 "id": 1, 00:33:59.496 "params": { 00:33:59.496 "admin_cmd_passthru": { 00:33:59.496 "identify_ctrlr": true 00:33:59.496 } 00:33:59.496 } 00:33:59.496 } 00:33:59.496 00:33:59.496 INFO: response: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "id": 1, 00:33:59.496 "result": true 00:33:59.496 } 00:33:59.496 00:33:59.496 10:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.496 10:30:32 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:59.496 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.496 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.496 INFO: Setting log level to 20 00:33:59.496 INFO: Setting log level to 20 00:33:59.496 INFO: Log level set to 20 00:33:59.496 INFO: Log level set to 20 00:33:59.496 INFO: Requests: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "method": "framework_start_init", 00:33:59.496 "id": 1 00:33:59.496 } 00:33:59.496 00:33:59.496 INFO: Requests: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "method": "framework_start_init", 00:33:59.496 "id": 1 00:33:59.496 } 00:33:59.496 00:33:59.496 [2024-04-17 10:30:32.631682] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:59.496 INFO: response: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "id": 1, 00:33:59.496 "result": true 00:33:59.496 } 00:33:59.496 00:33:59.496 INFO: response: 00:33:59.496 { 00:33:59.496 "jsonrpc": "2.0", 00:33:59.496 "id": 1, 00:33:59.496 "result": true 00:33:59.496 } 00:33:59.496 00:33:59.496 10:30:32 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.496 10:30:32 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:59.496 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.496 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.496 INFO: Setting log level to 40 00:33:59.496 INFO: Setting log level to 40 00:33:59.496 INFO: Setting log level to 40 00:33:59.496 [2024-04-17 10:30:32.645513] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.496 10:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.496 10:30:32 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:59.496 10:30:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:59.496 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.496 10:30:32 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:33:59.496 10:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.496 10:30:32 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 Nvme0n1 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:02.786 10:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.786 10:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:02.786 10:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.786 10:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.786 10:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.786 10:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 [2024-04-17 10:30:35.570975] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:02.786 10:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.786 10:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 [2024-04-17 10:30:35.578730] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:02.786 [ 00:34:02.786 { 00:34:02.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:02.786 "subtype": "Discovery", 00:34:02.786 "listen_addresses": [], 00:34:02.786 "allow_any_host": true, 00:34:02.786 "hosts": [] 00:34:02.786 }, 00:34:02.786 { 00:34:02.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.786 "subtype": "NVMe", 00:34:02.786 "listen_addresses": [ 00:34:02.786 { 00:34:02.786 "transport": "TCP", 00:34:02.786 "trtype": "TCP", 00:34:02.786 "adrfam": "IPv4", 00:34:02.786 "traddr": "10.0.0.2", 00:34:02.786 "trsvcid": "4420" 00:34:02.786 } 00:34:02.786 ], 00:34:02.786 "allow_any_host": true, 00:34:02.786 "hosts": [], 00:34:02.786 "serial_number": "SPDK00000000000001", 
00:34:02.786 "model_number": "SPDK bdev Controller", 00:34:02.786 "max_namespaces": 1, 00:34:02.786 "min_cntlid": 1, 00:34:02.786 "max_cntlid": 65519, 00:34:02.786 "namespaces": [ 00:34:02.786 { 00:34:02.786 "nsid": 1, 00:34:02.786 "bdev_name": "Nvme0n1", 00:34:02.786 "name": "Nvme0n1", 00:34:02.786 "nguid": "4DBAA3585259452A8F45FB3616E5F793", 00:34:02.786 "uuid": "4dbaa358-5259-452a-8f45-fb3616e5f793" 00:34:02.786 } 00:34:02.786 ] 00:34:02.786 } 00:34:02.786 ] 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:02.786 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.786 10:30:35 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:34:02.786 10:30:35 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:02.786 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.786 10:30:35 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:02.786 10:30:35 -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:02.786 10:30:35 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.786 10:30:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:02.786 10:30:35 -- common/autotest_common.sh@10 -- # set +x 00:34:02.786 10:30:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:02.786 10:30:35 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:02.786 10:30:35 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:02.786 10:30:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:02.786 10:30:35 -- nvmf/common.sh@116 -- # sync 00:34:02.786 10:30:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:02.786 10:30:35 -- nvmf/common.sh@119 -- # set +e 00:34:02.786 10:30:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:02.786 10:30:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:02.786 rmmod nvme_tcp 00:34:02.786 rmmod nvme_fabrics 00:34:02.786 rmmod nvme_keyring 00:34:02.786 10:30:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:02.786 10:30:35 -- nvmf/common.sh@123 -- # set -e 00:34:02.786 10:30:35 -- nvmf/common.sh@124 -- # return 0 00:34:02.786 10:30:35 -- nvmf/common.sh@477 -- # '[' -n 3666715 ']' 00:34:02.786 10:30:35 -- nvmf/common.sh@478 -- # killprocess 3666715 00:34:02.786 10:30:35 -- common/autotest_common.sh@926 -- # '[' -z 3666715 ']' 00:34:02.786 10:30:35 -- common/autotest_common.sh@930 -- # kill -0 3666715 00:34:02.787 10:30:35 -- common/autotest_common.sh@931 -- # uname 00:34:02.787 10:30:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:02.787 10:30:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3666715 00:34:02.787 10:30:35 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:02.787 10:30:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:02.787 10:30:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3666715' 00:34:02.787 killing process with pid 3666715 00:34:02.787 10:30:35 -- common/autotest_common.sh@945 -- # kill 3666715 00:34:02.787 [2024-04-17 10:30:35.935137] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:02.787 10:30:35 -- common/autotest_common.sh@950 -- # wait 3666715 00:34:04.691 10:30:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:04.691 10:30:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:04.691 10:30:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:04.691 10:30:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:04.691 10:30:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:04.691 10:30:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.691 10:30:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:04.691 10:30:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.595 10:30:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:06.595 00:34:06.595 real 0m22.389s 00:34:06.595 user 0m30.570s 00:34:06.595 sys 0m5.078s 00:34:06.595 10:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.595 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:34:06.595 ************************************ 00:34:06.595 END TEST nvmf_identify_passthru 00:34:06.595 ************************************ 00:34:06.595 10:30:39 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:06.595 10:30:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:06.595 10:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:06.595 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:34:06.595 ************************************ 00:34:06.595 START TEST nvmf_dif 00:34:06.595 ************************************ 00:34:06.595 10:30:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:06.595 * Looking for test storage... 
00:34:06.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.595 10:30:39 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.595 10:30:39 -- nvmf/common.sh@7 -- # uname -s 00:34:06.595 10:30:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.595 10:30:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.595 10:30:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.595 10:30:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.595 10:30:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.595 10:30:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.595 10:30:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.595 10:30:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.595 10:30:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.595 10:30:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.595 10:30:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:34:06.595 10:30:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:34:06.595 10:30:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.595 10:30:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.595 10:30:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.595 10:30:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.595 10:30:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.595 10:30:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.595 10:30:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.595 10:30:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.595 10:30:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.595 10:30:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.595 10:30:39 -- paths/export.sh@5 -- # export PATH 00:34:06.595 10:30:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.595 10:30:39 -- nvmf/common.sh@46 -- # : 0 00:34:06.595 10:30:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:06.595 10:30:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:06.595 10:30:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:06.595 10:30:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.595 10:30:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.595 10:30:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:06.595 10:30:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:06.595 10:30:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:06.595 10:30:39 -- target/dif.sh@15 -- # NULL_META=16 00:34:06.595 10:30:39 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:06.595 10:30:39 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:06.595 10:30:39 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:06.595 10:30:39 -- target/dif.sh@135 -- # nvmftestinit 00:34:06.595 10:30:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:06.595 10:30:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.595 10:30:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:06.595 10:30:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:06.595 10:30:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:06.595 10:30:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.595 10:30:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:06.595 10:30:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.595 10:30:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:06.595 10:30:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:06.595 10:30:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:06.595 10:30:39 -- common/autotest_common.sh@10 -- # set +x 00:34:11.867 10:30:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:11.867 10:30:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:11.867 10:30:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:11.867 10:30:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:11.867 10:30:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:11.867 10:30:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:11.867 10:30:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:11.867 10:30:45 -- nvmf/common.sh@294 -- # net_devs=() 00:34:11.867 10:30:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:11.867 10:30:45 -- nvmf/common.sh@295 -- # e810=() 00:34:11.867 10:30:45 -- nvmf/common.sh@295 -- # local -ga e810 00:34:11.867 10:30:45 -- nvmf/common.sh@296 -- # x722=() 00:34:11.867 10:30:45 -- nvmf/common.sh@296 -- # local -ga x722 00:34:11.867 10:30:45 -- nvmf/common.sh@297 -- # mlx=() 00:34:11.867 10:30:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:11.867 10:30:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:11.867 10:30:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.867 10:30:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:11.867 10:30:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:11.867 10:30:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:11.867 10:30:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:11.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:11.867 10:30:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:11.867 10:30:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:11.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:11.867 10:30:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:11.867 10:30:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.867 10:30:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.867 10:30:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:11.867 Found net devices under 0000:af:00.0: cvl_0_0 00:34:11.867 10:30:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.867 10:30:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:11.867 10:30:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.867 10:30:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.867 10:30:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:11.867 Found net devices under 0000:af:00.1: cvl_0_1 00:34:11.867 10:30:45 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:11.867 10:30:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:11.867 10:30:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:11.867 10:30:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:11.867 10:30:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.867 10:30:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.867 10:30:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.867 10:30:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:11.867 10:30:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.867 10:30:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.867 10:30:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:11.867 10:30:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.867 10:30:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.867 10:30:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:11.867 10:30:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:11.867 10:30:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.867 10:30:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.127 10:30:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.127 10:30:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.127 10:30:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:12.127 10:30:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.127 10:30:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.127 10:30:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.127 10:30:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:12.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:34:12.127 00:34:12.127 --- 10.0.0.2 ping statistics --- 00:34:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.127 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:34:12.127 10:30:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:34:12.127 00:34:12.127 --- 10.0.0.1 ping statistics --- 00:34:12.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.127 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:34:12.127 10:30:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.127 10:30:45 -- nvmf/common.sh@410 -- # return 0 00:34:12.127 10:30:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:12.127 10:30:45 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:14.661 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:14.661 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:14.661 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:14.919 10:30:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.919 10:30:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:14.919 10:30:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:14.919 10:30:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.919 10:30:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:14.919 10:30:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:14.919 10:30:48 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:14.919 10:30:48 -- target/dif.sh@137 -- # nvmfappstart 00:34:14.919 10:30:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:14.919 10:30:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:14.919 10:30:48 -- common/autotest_common.sh@10 -- # set +x 00:34:14.919 10:30:48 -- nvmf/common.sh@469 -- # nvmfpid=3672590 00:34:14.919 10:30:48 -- nvmf/common.sh@470 -- # waitforlisten 3672590 00:34:14.919 10:30:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:14.919 10:30:48 -- common/autotest_common.sh@819 -- # '[' -z 3672590 ']' 00:34:14.919 10:30:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.919 10:30:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:14.919 10:30:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
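The nvmf_tcp_init sequence traced above (once for identify_passthru and again here for the dif test) moves one port of the E810 pair into a private network namespace so target and initiator can talk over real hardware on a single host. A rough sketch of the same plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run; other systems will report different netdev names:

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # sanity check: default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back again

The target application is then launched under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace and why the nvmf_tgt command line below starts with the netns exec wrapper.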
00:34:14.919 10:30:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:14.919 10:30:48 -- common/autotest_common.sh@10 -- # set +x 00:34:14.919 [2024-04-17 10:30:48.218892] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:34:14.919 [2024-04-17 10:30:48.218946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.178 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.178 [2024-04-17 10:30:48.304572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.178 [2024-04-17 10:30:48.391420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:15.178 [2024-04-17 10:30:48.391560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.178 [2024-04-17 10:30:48.391572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.178 [2024-04-17 10:30:48.391581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.178 [2024-04-17 10:30:48.391601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.114 10:30:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:16.114 10:30:49 -- common/autotest_common.sh@852 -- # return 0 00:34:16.114 10:30:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:16.114 10:30:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 10:30:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.114 10:30:49 -- target/dif.sh@139 -- # create_transport 00:34:16.114 10:30:49 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:16.114 10:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 [2024-04-17 10:30:49.188348] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.114 10:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.114 10:30:49 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:16.114 10:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:16.114 10:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 ************************************ 00:34:16.114 START TEST fio_dif_1_default 00:34:16.114 ************************************ 00:34:16.114 10:30:49 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:16.114 10:30:49 -- target/dif.sh@86 -- # create_subsystems 0 00:34:16.114 10:30:49 -- target/dif.sh@28 -- # local sub 00:34:16.114 10:30:49 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.114 10:30:49 -- target/dif.sh@31 -- # create_subsystem 0 00:34:16.114 10:30:49 -- target/dif.sh@18 -- # local sub_id=0 00:34:16.114 10:30:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:16.114 10:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 bdev_null0 00:34:16.114 10:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.114 10:30:49 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:16.114 10:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 10:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.114 10:30:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:16.114 10:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 10:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.114 10:30:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.114 10:30:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.114 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:34:16.114 [2024-04-17 10:30:49.232594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.115 10:30:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.115 10:30:49 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:16.115 10:30:49 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:16.115 10:30:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:16.115 10:30:49 -- nvmf/common.sh@520 -- # config=() 00:34:16.115 10:30:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.115 10:30:49 -- nvmf/common.sh@520 -- # local subsystem config 00:34:16.115 10:30:49 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.115 10:30:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:16.115 10:30:49 -- target/dif.sh@82 -- # gen_fio_conf 00:34:16.115 10:30:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:16.115 10:30:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:16.115 { 00:34:16.115 "params": { 00:34:16.115 "name": "Nvme$subsystem", 00:34:16.115 "trtype": "$TEST_TRANSPORT", 00:34:16.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.115 "adrfam": "ipv4", 00:34:16.115 "trsvcid": "$NVMF_PORT", 00:34:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.115 "hdgst": ${hdgst:-false}, 00:34:16.115 "ddgst": ${ddgst:-false} 00:34:16.115 }, 00:34:16.115 "method": "bdev_nvme_attach_controller" 00:34:16.115 } 00:34:16.115 EOF 00:34:16.115 )") 00:34:16.115 10:30:49 -- target/dif.sh@54 -- # local file 00:34:16.115 10:30:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:16.115 10:30:49 -- target/dif.sh@56 -- # cat 00:34:16.115 10:30:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:16.115 10:30:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.115 10:30:49 -- common/autotest_common.sh@1320 -- # shift 00:34:16.115 10:30:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:16.115 10:30:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.115 10:30:49 -- nvmf/common.sh@542 -- # cat 00:34:16.115 10:30:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:16.115 10:30:49 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.115 10:30:49 -- nvmf/common.sh@544 -- # jq . 00:34:16.115 10:30:49 -- nvmf/common.sh@545 -- # IFS=, 00:34:16.115 10:30:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:16.115 "params": { 00:34:16.115 "name": "Nvme0", 00:34:16.115 "trtype": "tcp", 00:34:16.115 "traddr": "10.0.0.2", 00:34:16.115 "adrfam": "ipv4", 00:34:16.115 "trsvcid": "4420", 00:34:16.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.115 "hdgst": false, 00:34:16.115 "ddgst": false 00:34:16.115 }, 00:34:16.115 "method": "bdev_nvme_attach_controller" 00:34:16.115 }' 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.115 10:30:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.115 10:30:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:16.115 10:30:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.115 10:30:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.115 10:30:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:16.115 10:30:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.373 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:16.373 fio-3.35 00:34:16.373 Starting 1 thread 00:34:16.373 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.940 [2024-04-17 10:30:50.120290] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:16.940 [2024-04-17 10:30:50.120345] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:29.149 00:34:29.149 filename0: (groupid=0, jobs=1): err= 0: pid=3673023: Wed Apr 17 10:31:00 2024 00:34:29.149 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10035msec) 00:34:29.149 slat (nsec): min=9027, max=30118, avg=9381.69, stdev=1221.33 00:34:29.149 clat (usec): min=40811, max=42986, avg=41097.12, stdev=342.22 00:34:29.149 lat (usec): min=40820, max=43013, avg=41106.50, stdev=342.31 00:34:29.149 clat percentiles (usec): 00:34:29.149 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:29.149 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:29.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:29.149 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:29.149 | 99.99th=[42730] 00:34:29.149 bw ( KiB/s): min= 384, max= 416, per=99.73%, avg=388.80, stdev=11.72, samples=20 00:34:29.149 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:29.149 lat (msec) : 50=100.00% 00:34:29.149 cpu : usr=95.60%, sys=4.09%, ctx=8, majf=0, minf=237 00:34:29.149 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.149 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.149 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:29.149 00:34:29.149 Run status group 0 (all jobs): 00:34:29.149 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10035-10035msec 00:34:29.149 10:31:00 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:29.149 10:31:00 -- target/dif.sh@43 -- # local sub 00:34:29.149 10:31:00 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.149 10:31:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.149 10:31:00 -- target/dif.sh@36 -- # local sub_id=0 00:34:29.149 10:31:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.149 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.149 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.149 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 00:34:29.150 real 0m11.261s 00:34:29.150 user 0m21.503s 00:34:29.150 sys 0m0.705s 00:34:29.150 10:31:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 ************************************ 00:34:29.150 END TEST fio_dif_1_default 00:34:29.150 ************************************ 00:34:29.150 10:31:00 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:29.150 10:31:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:29.150 10:31:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 ************************************ 00:34:29.150 START TEST fio_dif_1_multi_subsystems 00:34:29.150 
************************************ 00:34:29.150 10:31:00 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:29.150 10:31:00 -- target/dif.sh@92 -- # local files=1 00:34:29.150 10:31:00 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:29.150 10:31:00 -- target/dif.sh@28 -- # local sub 00:34:29.150 10:31:00 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.150 10:31:00 -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.150 10:31:00 -- target/dif.sh@18 -- # local sub_id=0 00:34:29.150 10:31:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 bdev_null0 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 [2024-04-17 10:31:00.540997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.150 10:31:00 -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.150 10:31:00 -- target/dif.sh@18 -- # local sub_id=1 00:34:29.150 10:31:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 bdev_null1 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.150 10:31:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.150 10:31:00 -- common/autotest_common.sh@10 -- # set +x 
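Both fio_dif tests provision the target the same way: a null bdev is created with a 16-byte metadata area and DIF type 1, wrapped in an NVMe-oF subsystem, and exposed on the TCP listener, while the transport itself was created with --dif-insert-or-strip. The rpc_cmd calls traced above correspond roughly to plain scripts/rpc.py invocations like the following sketch (assuming the default /var/tmp/spdk.sock socket; the harness's rpc_cmd wrapper additionally runs them inside the target's network namespace):

  rpc=./scripts/rpc.py
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then drives these namespaces through the spdk_bdev ioengine, using the bdev_nvme_attach_controller JSON that gen_nvmf_target_json emits on the fly (the config visible in the fio_dif_1_default trace above and repeated below for the two-subsystem case).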
00:34:29.150 10:31:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.150 10:31:00 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:29.150 10:31:00 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:29.150 10:31:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:29.150 10:31:00 -- nvmf/common.sh@520 -- # config=() 00:34:29.150 10:31:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.150 10:31:00 -- nvmf/common.sh@520 -- # local subsystem config 00:34:29.150 10:31:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:29.150 10:31:00 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.150 10:31:00 -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.150 10:31:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:29.150 { 00:34:29.150 "params": { 00:34:29.150 "name": "Nvme$subsystem", 00:34:29.150 "trtype": "$TEST_TRANSPORT", 00:34:29.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.150 "adrfam": "ipv4", 00:34:29.150 "trsvcid": "$NVMF_PORT", 00:34:29.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.150 "hdgst": ${hdgst:-false}, 00:34:29.150 "ddgst": ${ddgst:-false} 00:34:29.150 }, 00:34:29.150 "method": "bdev_nvme_attach_controller" 00:34:29.150 } 00:34:29.150 EOF 00:34:29.150 )") 00:34:29.150 10:31:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:29.150 10:31:00 -- target/dif.sh@54 -- # local file 00:34:29.150 10:31:00 -- target/dif.sh@56 -- # cat 00:34:29.150 10:31:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.150 10:31:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:29.150 10:31:00 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.150 10:31:00 -- common/autotest_common.sh@1320 -- # shift 00:34:29.150 10:31:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:29.150 10:31:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.150 10:31:00 -- nvmf/common.sh@542 -- # cat 00:34:29.150 10:31:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.150 10:31:00 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:29.150 10:31:00 -- target/dif.sh@73 -- # cat 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:29.150 10:31:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:29.150 10:31:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:29.150 { 00:34:29.150 "params": { 00:34:29.150 "name": "Nvme$subsystem", 00:34:29.150 "trtype": "$TEST_TRANSPORT", 00:34:29.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.150 "adrfam": "ipv4", 00:34:29.150 "trsvcid": "$NVMF_PORT", 00:34:29.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.150 "hdgst": ${hdgst:-false}, 00:34:29.150 "ddgst": ${ddgst:-false} 00:34:29.150 }, 00:34:29.150 "method": "bdev_nvme_attach_controller" 00:34:29.150 } 00:34:29.150 EOF 00:34:29.150 )") 00:34:29.150 10:31:00 -- target/dif.sh@72 -- # (( file++ )) 00:34:29.150 
10:31:00 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.150 10:31:00 -- nvmf/common.sh@542 -- # cat 00:34:29.150 10:31:00 -- nvmf/common.sh@544 -- # jq . 00:34:29.150 10:31:00 -- nvmf/common.sh@545 -- # IFS=, 00:34:29.150 10:31:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:29.150 "params": { 00:34:29.150 "name": "Nvme0", 00:34:29.150 "trtype": "tcp", 00:34:29.150 "traddr": "10.0.0.2", 00:34:29.150 "adrfam": "ipv4", 00:34:29.150 "trsvcid": "4420", 00:34:29.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.150 "hdgst": false, 00:34:29.150 "ddgst": false 00:34:29.150 }, 00:34:29.150 "method": "bdev_nvme_attach_controller" 00:34:29.150 },{ 00:34:29.150 "params": { 00:34:29.150 "name": "Nvme1", 00:34:29.150 "trtype": "tcp", 00:34:29.150 "traddr": "10.0.0.2", 00:34:29.150 "adrfam": "ipv4", 00:34:29.150 "trsvcid": "4420", 00:34:29.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:29.150 "hdgst": false, 00:34:29.150 "ddgst": false 00:34:29.150 }, 00:34:29.150 "method": "bdev_nvme_attach_controller" 00:34:29.150 }' 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:29.150 10:31:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:29.150 10:31:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:29.150 10:31:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:29.150 10:31:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:29.150 10:31:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.150 10:31:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.150 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.150 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.150 fio-3.35 00:34:29.150 Starting 2 threads 00:34:29.150 EAL: No free 2048 kB hugepages reported on node 1 00:34:29.150 [2024-04-17 10:31:01.601261] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:29.150 [2024-04-17 10:31:01.601316] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:39.124 00:34:39.124 filename0: (groupid=0, jobs=1): err= 0: pid=3675287: Wed Apr 17 10:31:11 2024 00:34:39.124 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10025msec) 00:34:39.124 slat (nsec): min=5325, max=22013, avg=10933.55, stdev=2750.08 00:34:39.124 clat (usec): min=40793, max=48507, avg=41391.88, stdev=683.57 00:34:39.125 lat (usec): min=40802, max=48522, avg=41402.81, stdev=683.50 00:34:39.125 clat percentiles (usec): 00:34:39.125 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:39.125 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:39.125 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:39.125 | 99.00th=[42730], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:34:39.125 | 99.99th=[48497] 00:34:39.125 bw ( KiB/s): min= 352, max= 416, per=33.64%, avg=385.60, stdev=12.61, samples=20 00:34:39.125 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:34:39.125 lat (msec) : 50=100.00% 00:34:39.125 cpu : usr=97.73%, sys=2.00%, ctx=13, majf=0, minf=177 00:34:39.125 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.125 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.125 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:39.125 filename1: (groupid=0, jobs=1): err= 0: pid=3675288: Wed Apr 17 10:31:11 2024 00:34:39.125 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10025msec) 00:34:39.125 slat (nsec): min=2485, max=45861, avg=6607.20, stdev=2464.29 00:34:39.125 clat (usec): min=683, max=49325, avg=21084.68, stdev=20324.17 00:34:39.125 lat (usec): min=688, max=49340, avg=21091.28, stdev=20323.54 00:34:39.125 clat percentiles (usec): 00:34:39.125 | 1.00th=[ 685], 5.00th=[ 693], 10.00th=[ 701], 20.00th=[ 709], 00:34:39.125 | 30.00th=[ 717], 40.00th=[ 758], 50.00th=[40633], 60.00th=[41157], 00:34:39.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:39.125 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49546], 99.95th=[49546], 00:34:39.125 | 99.99th=[49546] 00:34:39.125 bw ( KiB/s): min= 704, max= 768, per=66.24%, avg=758.40, stdev=23.45, samples=20 00:34:39.125 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:34:39.125 lat (usec) : 750=39.79%, 1000=9.47% 00:34:39.125 lat (msec) : 2=0.63%, 50=50.11% 00:34:39.125 cpu : usr=98.22%, sys=1.53%, ctx=15, majf=0, minf=124 00:34:39.125 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:39.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:39.125 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:39.125 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:39.125 00:34:39.125 Run status group 0 (all jobs): 00:34:39.125 READ: bw=1144KiB/s (1172kB/s), 386KiB/s-758KiB/s (396kB/s-776kB/s), io=11.2MiB (11.7MB), run=10025-10025msec 00:34:39.125 10:31:11 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:39.125 10:31:11 -- target/dif.sh@43 -- # local sub 00:34:39.125 10:31:11 -- target/dif.sh@45 -- # for sub in "$@" 00:34:39.125 10:31:11 -- target/dif.sh@46 -- # destroy_subsystem 0 
00:34:39.125 10:31:11 -- target/dif.sh@36 -- # local sub_id=0 00:34:39.125 10:31:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:39.125 10:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:39.125 10:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:11 -- target/dif.sh@45 -- # for sub in "$@" 00:34:39.125 10:31:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:39.125 10:31:11 -- target/dif.sh@36 -- # local sub_id=1 00:34:39.125 10:31:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:39.125 10:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:39.125 10:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 00:34:39.125 real 0m11.475s 00:34:39.125 user 0m31.975s 00:34:39.125 sys 0m0.716s 00:34:39.125 10:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.125 10:31:11 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 ************************************ 00:34:39.125 END TEST fio_dif_1_multi_subsystems 00:34:39.125 ************************************ 00:34:39.125 10:31:12 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:39.125 10:31:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:39.125 10:31:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:39.125 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 ************************************ 00:34:39.125 START TEST fio_dif_rand_params 00:34:39.125 ************************************ 00:34:39.125 10:31:12 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:39.125 10:31:12 -- target/dif.sh@100 -- # local NULL_DIF 00:34:39.125 10:31:12 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:39.125 10:31:12 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:39.125 10:31:12 -- target/dif.sh@103 -- # bs=128k 00:34:39.125 10:31:12 -- target/dif.sh@103 -- # numjobs=3 00:34:39.125 10:31:12 -- target/dif.sh@103 -- # iodepth=3 00:34:39.125 10:31:12 -- target/dif.sh@103 -- # runtime=5 00:34:39.125 10:31:12 -- target/dif.sh@105 -- # create_subsystems 0 00:34:39.125 10:31:12 -- target/dif.sh@28 -- # local sub 00:34:39.125 10:31:12 -- target/dif.sh@30 -- # for sub in "$@" 00:34:39.125 10:31:12 -- target/dif.sh@31 -- # create_subsystem 0 00:34:39.125 10:31:12 -- target/dif.sh@18 -- # local sub_id=0 00:34:39.125 10:31:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:39.125 10:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 bdev_null0 00:34:39.125 10:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:12 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:39.125 10:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:39.125 10:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 10:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:39.125 10:31:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:39.125 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:34:39.125 [2024-04-17 10:31:12.054846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.125 10:31:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:39.125 10:31:12 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:39.125 10:31:12 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:39.125 10:31:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:39.125 10:31:12 -- nvmf/common.sh@520 -- # config=() 00:34:39.125 10:31:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.125 10:31:12 -- nvmf/common.sh@520 -- # local subsystem config 00:34:39.125 10:31:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.125 10:31:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:39.125 10:31:12 -- target/dif.sh@82 -- # gen_fio_conf 00:34:39.125 10:31:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:39.125 10:31:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:39.125 { 00:34:39.125 "params": { 00:34:39.125 "name": "Nvme$subsystem", 00:34:39.125 "trtype": "$TEST_TRANSPORT", 00:34:39.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:39.125 "adrfam": "ipv4", 00:34:39.125 "trsvcid": "$NVMF_PORT", 00:34:39.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:39.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:39.125 "hdgst": ${hdgst:-false}, 00:34:39.125 "ddgst": ${ddgst:-false} 00:34:39.125 }, 00:34:39.125 "method": "bdev_nvme_attach_controller" 00:34:39.125 } 00:34:39.125 EOF 00:34:39.125 )") 00:34:39.125 10:31:12 -- target/dif.sh@54 -- # local file 00:34:39.125 10:31:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:39.125 10:31:12 -- target/dif.sh@56 -- # cat 00:34:39.125 10:31:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:39.125 10:31:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.125 10:31:12 -- common/autotest_common.sh@1320 -- # shift 00:34:39.125 10:31:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:39.125 10:31:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:39.125 10:31:12 -- nvmf/common.sh@542 -- # cat 00:34:39.125 10:31:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:39.125 10:31:12 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.125 10:31:12 -- target/dif.sh@72 -- # (( file <= files )) 00:34:39.125 10:31:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:39.125 10:31:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:39.125 10:31:12 -- nvmf/common.sh@544 -- # jq . 00:34:39.125 10:31:12 -- nvmf/common.sh@545 -- # IFS=, 00:34:39.125 10:31:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:39.125 "params": { 00:34:39.125 "name": "Nvme0", 00:34:39.125 "trtype": "tcp", 00:34:39.125 "traddr": "10.0.0.2", 00:34:39.125 "adrfam": "ipv4", 00:34:39.125 "trsvcid": "4420", 00:34:39.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:39.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:39.125 "hdgst": false, 00:34:39.125 "ddgst": false 00:34:39.125 }, 00:34:39.126 "method": "bdev_nvme_attach_controller" 00:34:39.126 }' 00:34:39.126 10:31:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:39.126 10:31:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:39.126 10:31:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:39.126 10:31:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:39.126 10:31:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:39.126 10:31:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:39.126 10:31:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:39.126 10:31:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:39.126 10:31:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:39.126 10:31:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:39.126 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:39.126 ... 00:34:39.126 fio-3.35 00:34:39.126 Starting 3 threads 00:34:39.384 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.656 [2024-04-17 10:31:12.971262] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:39.656 [2024-04-17 10:31:12.971317] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:44.974 00:34:44.974 filename0: (groupid=0, jobs=1): err= 0: pid=3677297: Wed Apr 17 10:31:18 2024 00:34:44.974 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(128MiB/5006msec) 00:34:44.974 slat (nsec): min=9275, max=27363, avg=13817.02, stdev=2977.13 00:34:44.974 clat (usec): min=5406, max=56974, avg=14700.38, stdev=11696.73 00:34:44.974 lat (usec): min=5419, max=56990, avg=14714.20, stdev=11696.83 00:34:44.974 clat percentiles (usec): 00:34:44.974 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 8848], 00:34:44.974 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11731], 60.00th=[12518], 00:34:44.974 | 70.00th=[13566], 80.00th=[14746], 90.00th=[17695], 95.00th=[50594], 00:34:44.974 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:34:44.974 | 99.99th=[56886] 00:34:44.974 bw ( KiB/s): min=16384, max=33280, per=34.28%, avg=26060.80, stdev=5608.42, samples=10 00:34:44.974 iops : min= 128, max= 260, avg=203.60, stdev=43.82, samples=10 00:34:44.974 lat (msec) : 10=34.02%, 20=57.25%, 50=2.84%, 100=5.88% 00:34:44.974 cpu : usr=94.95%, sys=4.72%, ctx=7, majf=0, minf=115 00:34:44.974 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.974 filename0: (groupid=0, jobs=1): err= 0: pid=3677298: Wed Apr 17 10:31:18 2024 00:34:44.974 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(118MiB/5049msec) 00:34:44.974 slat (nsec): min=9297, max=23722, avg=13672.34, stdev=2962.68 00:34:44.974 clat (usec): min=5467, max=56874, avg=15993.65, stdev=13591.96 00:34:44.974 lat (usec): min=5477, max=56891, avg=16007.32, stdev=13591.96 00:34:44.974 clat percentiles (usec): 00:34:44.974 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 7504], 20.00th=[ 8979], 00:34:44.974 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11731], 60.00th=[12518], 00:34:44.974 | 70.00th=[13435], 80.00th=[14353], 90.00th=[49546], 95.00th=[52167], 00:34:44.974 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:34:44.974 | 99.99th=[56886] 00:34:44.974 bw ( KiB/s): min=17152, max=33536, per=31.65%, avg=24064.00, stdev=5505.16, samples=10 00:34:44.974 iops : min= 134, max= 262, avg=188.00, stdev=43.01, samples=10 00:34:44.974 lat (msec) : 10=33.62%, 20=54.08%, 50=3.29%, 100=9.01% 00:34:44.974 cpu : usr=95.80%, sys=3.88%, ctx=9, majf=0, minf=123 00:34:44.974 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 issued rwts: total=943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.974 filename0: (groupid=0, jobs=1): err= 0: pid=3677299: Wed Apr 17 10:31:18 2024 00:34:44.974 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(130MiB/5047msec) 00:34:44.974 slat (nsec): min=9264, max=24784, avg=13510.49, stdev=3035.50 00:34:44.974 clat (usec): min=5359, max=54531, avg=14554.87, stdev=11581.48 00:34:44.974 lat (usec): min=5369, max=54549, avg=14568.38, stdev=11581.68 00:34:44.974 clat 
percentiles (usec): 00:34:44.974 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 8586], 00:34:44.974 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11469], 60.00th=[12649], 00:34:44.974 | 70.00th=[13566], 80.00th=[15139], 90.00th=[17957], 95.00th=[50594], 00:34:44.974 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:34:44.974 | 99.99th=[54789] 00:34:44.974 bw ( KiB/s): min=14080, max=31807, per=34.79%, avg=26451.10, stdev=5162.73, samples=10 00:34:44.974 iops : min= 110, max= 248, avg=206.60, stdev=40.28, samples=10 00:34:44.974 lat (msec) : 10=38.32%, 20=53.09%, 50=2.90%, 100=5.69% 00:34:44.974 cpu : usr=95.84%, sys=3.84%, ctx=9, majf=0, minf=38 00:34:44.974 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.974 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.974 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:44.974 00:34:44.974 Run status group 0 (all jobs): 00:34:44.974 READ: bw=74.2MiB/s (77.9MB/s), 23.3MiB/s-25.7MiB/s (24.5MB/s-26.9MB/s), io=375MiB (393MB), run=5006-5049msec 00:34:45.234 10:31:18 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:45.234 10:31:18 -- target/dif.sh@43 -- # local sub 00:34:45.235 10:31:18 -- target/dif.sh@45 -- # for sub in "$@" 00:34:45.235 10:31:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:45.235 10:31:18 -- target/dif.sh@36 -- # local sub_id=0 00:34:45.235 10:31:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # bs=4k 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # numjobs=8 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # iodepth=16 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # runtime= 00:34:45.235 10:31:18 -- target/dif.sh@109 -- # files=2 00:34:45.235 10:31:18 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:45.235 10:31:18 -- target/dif.sh@28 -- # local sub 00:34:45.235 10:31:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.235 10:31:18 -- target/dif.sh@31 -- # create_subsystem 0 00:34:45.235 10:31:18 -- target/dif.sh@18 -- # local sub_id=0 00:34:45.235 10:31:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 bdev_null0 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 [2024-04-17 10:31:18.357337] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.235 10:31:18 -- target/dif.sh@31 -- # create_subsystem 1 00:34:45.235 10:31:18 -- target/dif.sh@18 -- # local sub_id=1 00:34:45.235 10:31:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 bdev_null1 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.235 10:31:18 -- target/dif.sh@31 -- # create_subsystem 2 00:34:45.235 10:31:18 -- target/dif.sh@18 -- # local sub_id=2 00:34:45.235 10:31:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 bdev_null2 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 
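[editor's sketch] Each create_subsystem call traced above expands to the same four RPCs: create a null bdev with 512-byte blocks, 16 bytes of metadata and the requested DIF type; create the NVMe-oF subsystem; attach the bdev as a namespace; and add a TCP listener. A hand-rolled equivalent for one subsystem is sketched below; the rpc.py path is a placeholder, and the TCP transport is assumed to have been created earlier during target startup, as the wider test script does.

# Sketch only: one iteration of the setup shown in the xtrace output above.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420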
00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:45.235 10:31:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:45.235 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:34:45.235 10:31:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:45.235 10:31:18 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:45.235 10:31:18 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:45.235 10:31:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:45.235 10:31:18 -- nvmf/common.sh@520 -- # config=() 00:34:45.235 10:31:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.235 10:31:18 -- nvmf/common.sh@520 -- # local subsystem config 00:34:45.235 10:31:18 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.235 10:31:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:45.235 10:31:18 -- target/dif.sh@82 -- # gen_fio_conf 00:34:45.235 10:31:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:45.235 { 00:34:45.235 "params": { 00:34:45.235 "name": "Nvme$subsystem", 00:34:45.235 "trtype": "$TEST_TRANSPORT", 00:34:45.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.235 "adrfam": "ipv4", 00:34:45.235 "trsvcid": "$NVMF_PORT", 00:34:45.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.235 "hdgst": ${hdgst:-false}, 00:34:45.235 "ddgst": ${ddgst:-false} 00:34:45.235 }, 00:34:45.235 "method": "bdev_nvme_attach_controller" 00:34:45.235 } 00:34:45.235 EOF 00:34:45.235 )") 00:34:45.235 10:31:18 -- target/dif.sh@54 -- # local file 00:34:45.235 10:31:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:45.235 10:31:18 -- target/dif.sh@56 -- # cat 00:34:45.235 10:31:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:45.235 10:31:18 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.235 10:31:18 -- common/autotest_common.sh@1320 -- # shift 00:34:45.235 10:31:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:45.235 10:31:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # cat 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:45.235 10:31:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.235 10:31:18 -- target/dif.sh@73 -- # cat 00:34:45.235 10:31:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:45.235 10:31:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:45.235 10:31:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:45.235 { 00:34:45.235 "params": { 00:34:45.235 "name": "Nvme$subsystem", 00:34:45.235 "trtype": "$TEST_TRANSPORT", 00:34:45.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.235 "adrfam": "ipv4", 
00:34:45.235 "trsvcid": "$NVMF_PORT", 00:34:45.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.235 "hdgst": ${hdgst:-false}, 00:34:45.235 "ddgst": ${ddgst:-false} 00:34:45.235 }, 00:34:45.235 "method": "bdev_nvme_attach_controller" 00:34:45.235 } 00:34:45.235 EOF 00:34:45.235 )") 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file++ )) 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:45.235 10:31:18 -- target/dif.sh@73 -- # cat 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # cat 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file++ )) 00:34:45.235 10:31:18 -- target/dif.sh@72 -- # (( file <= files )) 00:34:45.235 10:31:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:45.235 { 00:34:45.235 "params": { 00:34:45.235 "name": "Nvme$subsystem", 00:34:45.235 "trtype": "$TEST_TRANSPORT", 00:34:45.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.235 "adrfam": "ipv4", 00:34:45.235 "trsvcid": "$NVMF_PORT", 00:34:45.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.235 "hdgst": ${hdgst:-false}, 00:34:45.235 "ddgst": ${ddgst:-false} 00:34:45.235 }, 00:34:45.235 "method": "bdev_nvme_attach_controller" 00:34:45.235 } 00:34:45.235 EOF 00:34:45.235 )") 00:34:45.235 10:31:18 -- nvmf/common.sh@542 -- # cat 00:34:45.235 10:31:18 -- nvmf/common.sh@544 -- # jq . 00:34:45.235 10:31:18 -- nvmf/common.sh@545 -- # IFS=, 00:34:45.235 10:31:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:45.235 "params": { 00:34:45.235 "name": "Nvme0", 00:34:45.235 "trtype": "tcp", 00:34:45.235 "traddr": "10.0.0.2", 00:34:45.235 "adrfam": "ipv4", 00:34:45.235 "trsvcid": "4420", 00:34:45.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:45.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:45.236 "hdgst": false, 00:34:45.236 "ddgst": false 00:34:45.236 }, 00:34:45.236 "method": "bdev_nvme_attach_controller" 00:34:45.236 },{ 00:34:45.236 "params": { 00:34:45.236 "name": "Nvme1", 00:34:45.236 "trtype": "tcp", 00:34:45.236 "traddr": "10.0.0.2", 00:34:45.236 "adrfam": "ipv4", 00:34:45.236 "trsvcid": "4420", 00:34:45.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:45.236 "hdgst": false, 00:34:45.236 "ddgst": false 00:34:45.236 }, 00:34:45.236 "method": "bdev_nvme_attach_controller" 00:34:45.236 },{ 00:34:45.236 "params": { 00:34:45.236 "name": "Nvme2", 00:34:45.236 "trtype": "tcp", 00:34:45.236 "traddr": "10.0.0.2", 00:34:45.236 "adrfam": "ipv4", 00:34:45.236 "trsvcid": "4420", 00:34:45.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:45.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:45.236 "hdgst": false, 00:34:45.236 "ddgst": false 00:34:45.236 }, 00:34:45.236 "method": "bdev_nvme_attach_controller" 00:34:45.236 }' 00:34:45.236 10:31:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:45.236 10:31:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:45.236 10:31:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.236 10:31:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.236 10:31:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:45.236 10:31:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:45.236 10:31:18 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:34:45.236 10:31:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:45.236 10:31:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:45.236 10:31:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.801 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.801 ... 00:34:45.801 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.801 ... 00:34:45.801 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:45.801 ... 00:34:45.801 fio-3.35 00:34:45.801 Starting 24 threads 00:34:45.801 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.369 [2024-04-17 10:31:19.520093] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:46.369 [2024-04-17 10:31:19.520153] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:58.573 00:34:58.573 filename0: (groupid=0, jobs=1): err= 0: pid=3678643: Wed Apr 17 10:31:29 2024 00:34:58.573 read: IOPS=446, BW=1787KiB/s (1830kB/s)(17.5MiB/10004msec) 00:34:58.573 slat (usec): min=9, max=111, avg=49.19, stdev=23.17 00:34:58.573 clat (usec): min=7484, max=48897, avg=35430.79, stdev=2529.70 00:34:58.573 lat (usec): min=7498, max=48911, avg=35479.97, stdev=2530.45 00:34:58.573 clat percentiles (usec): 00:34:58.573 | 1.00th=[33162], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.573 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.573 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.573 | 99.00th=[39584], 99.50th=[41157], 99.90th=[49021], 99.95th=[49021], 00:34:58.573 | 99.99th=[49021] 00:34:58.573 bw ( KiB/s): min= 1664, max= 1843, per=4.18%, avg=1786.05, stdev=32.02, samples=19 00:34:58.573 iops : min= 416, max= 460, avg=446.47, stdev= 7.93, samples=19 00:34:58.573 lat (msec) : 10=0.49%, 20=0.36%, 50=99.15% 00:34:58.573 cpu : usr=98.67%, sys=0.72%, ctx=52, majf=0, minf=22 00:34:58.573 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.573 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.573 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.573 filename0: (groupid=0, jobs=1): err= 0: pid=3678644: Wed Apr 17 10:31:29 2024 00:34:58.573 read: IOPS=444, BW=1778KiB/s (1821kB/s)(17.4MiB/10006msec) 00:34:58.573 slat (nsec): min=7426, max=93363, avg=47930.74, stdev=17452.80 00:34:58.573 clat (usec): min=12632, max=81121, avg=35567.84, stdev=3298.26 00:34:58.573 lat (usec): min=12649, max=81137, avg=35615.77, stdev=3297.35 00:34:58.573 clat percentiles (usec): 00:34:58.573 | 1.00th=[28443], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.573 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.573 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.573 | 99.00th=[39060], 99.50th=[41157], 99.90th=[81265], 99.95th=[81265], 00:34:58.573 | 99.99th=[81265] 00:34:58.573 bw ( KiB/s): min= 1539, max= 1795, per=4.12%, avg=1763.26, 
stdev=67.61, samples=19 00:34:58.573 iops : min= 384, max= 448, avg=440.74, stdev=17.02, samples=19 00:34:58.573 lat (msec) : 20=0.72%, 50=98.92%, 100=0.36% 00:34:58.573 cpu : usr=97.65%, sys=1.21%, ctx=30, majf=0, minf=38 00:34:58.573 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.573 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.573 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.573 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.573 filename0: (groupid=0, jobs=1): err= 0: pid=3678645: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10012msec) 00:34:58.574 slat (usec): min=9, max=120, avg=56.43, stdev=19.16 00:34:58.574 clat (usec): min=27053, max=52211, avg=35506.43, stdev=1275.00 00:34:58.574 lat (usec): min=27087, max=52228, avg=35562.86, stdev=1273.76 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.574 | 99.00th=[39060], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:34:58.574 | 99.99th=[52167] 00:34:58.574 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1770.32, stdev=47.33, samples=19 00:34:58.574 iops : min= 416, max= 448, avg=442.58, stdev=11.83, samples=19 00:34:58.574 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.574 cpu : usr=98.88%, sys=0.71%, ctx=32, majf=0, minf=34 00:34:58.574 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.574 filename0: (groupid=0, jobs=1): err= 0: pid=3678646: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=444, BW=1779KiB/s (1822kB/s)(17.4MiB/10002msec) 00:34:58.574 slat (usec): min=4, max=107, avg=50.16, stdev=18.39 00:34:58.574 clat (usec): min=23416, max=49327, avg=35548.36, stdev=1328.80 00:34:58.574 lat (usec): min=23434, max=49342, avg=35598.52, stdev=1326.89 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.574 | 99.00th=[39060], 99.50th=[41681], 99.90th=[49546], 99.95th=[49546], 00:34:58.574 | 99.99th=[49546] 00:34:58.574 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1777.05, stdev=39.89, samples=19 00:34:58.574 iops : min= 416, max= 448, avg=444.26, stdev= 9.97, samples=19 00:34:58.574 lat (msec) : 50=100.00% 00:34:58.574 cpu : usr=98.74%, sys=0.70%, ctx=34, majf=0, minf=30 00:34:58.574 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.574 latency : target=0, window=0, percentile=100.00%, depth=16 
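[editor's sketch] The per-thread figures in these result blocks are internally consistent with the 4 KiB block size of this job: bandwidth is simply IOPS times block size. A quick check for the IOPS=444 threads above (values taken from the log; the arithmetic is only illustrative):

# ~444 IOPS at bs=4096B -> 444 * 4096 / 1024 = 1776 KiB/s,
# matching the 1777-1779 KiB/s fio reports, up to rounding of the IOPS figure.
echo $((444 * 4096 / 1024))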
00:34:58.574 filename0: (groupid=0, jobs=1): err= 0: pid=3678648: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=447, BW=1788KiB/s (1831kB/s)(17.5MiB/10011msec) 00:34:58.574 slat (nsec): min=5930, max=89192, avg=42505.36, stdev=19106.55 00:34:58.574 clat (usec): min=13447, max=74090, avg=35414.70, stdev=3564.90 00:34:58.574 lat (usec): min=13465, max=74106, avg=35457.20, stdev=3565.69 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[22152], 5.00th=[34341], 10.00th=[34866], 20.00th=[35390], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.574 | 99.00th=[42206], 99.50th=[56361], 99.90th=[73925], 99.95th=[73925], 00:34:58.574 | 99.99th=[73925] 00:34:58.574 bw ( KiB/s): min= 1536, max= 1916, per=4.17%, avg=1781.68, stdev=76.61, samples=19 00:34:58.574 iops : min= 384, max= 479, avg=445.42, stdev=19.15, samples=19 00:34:58.574 lat (msec) : 20=0.63%, 50=98.79%, 100=0.58% 00:34:58.574 cpu : usr=98.91%, sys=0.72%, ctx=18, majf=0, minf=29 00:34:58.574 IO depths : 1=5.2%, 2=10.9%, 4=23.0%, 8=53.2%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:58.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 issued rwts: total=4476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.574 filename0: (groupid=0, jobs=1): err= 0: pid=3678649: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10001msec) 00:34:58.574 slat (usec): min=7, max=103, avg=46.19, stdev=19.22 00:34:58.574 clat (usec): min=8077, max=48008, avg=35463.59, stdev=2350.55 00:34:58.574 lat (usec): min=8094, max=48067, avg=35509.79, stdev=2352.00 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[32900], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.574 | 99.00th=[39584], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:34:58.574 | 99.99th=[47973] 00:34:58.574 bw ( KiB/s): min= 1660, max= 1792, per=4.17%, avg=1783.16, stdev=29.89, samples=19 00:34:58.574 iops : min= 415, max= 448, avg=445.79, stdev= 7.47, samples=19 00:34:58.574 lat (msec) : 10=0.36%, 20=0.36%, 50=99.28% 00:34:58.574 cpu : usr=99.10%, sys=0.43%, ctx=103, majf=0, minf=66 00:34:58.574 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.574 filename0: (groupid=0, jobs=1): err= 0: pid=3678650: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=448, BW=1792KiB/s (1835kB/s)(17.5MiB/10008msec) 00:34:58.574 slat (nsec): min=4287, max=93653, avg=36377.73, stdev=22911.59 00:34:58.574 clat (usec): min=7599, max=68461, avg=35416.26, stdev=4274.75 00:34:58.574 lat (usec): min=7607, max=68474, avg=35452.64, stdev=4275.86 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[21627], 5.00th=[29754], 10.00th=[31851], 20.00th=[34866], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 
00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[41157], 00:34:58.574 | 99.00th=[49546], 99.50th=[54789], 99.90th=[68682], 99.95th=[68682], 00:34:58.574 | 99.99th=[68682] 00:34:58.574 bw ( KiB/s): min= 1536, max= 1900, per=4.16%, avg=1778.26, stdev=73.96, samples=19 00:34:58.574 iops : min= 384, max= 475, avg=444.53, stdev=18.48, samples=19 00:34:58.574 lat (msec) : 10=0.36%, 20=0.54%, 50=98.13%, 100=0.98% 00:34:58.574 cpu : usr=99.03%, sys=0.59%, ctx=56, majf=0, minf=53 00:34:58.574 IO depths : 1=3.7%, 2=7.8%, 4=17.1%, 8=61.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:34:58.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 complete : 0=0.0%, 4=92.3%, 8=3.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.574 issued rwts: total=4484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.574 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.574 filename0: (groupid=0, jobs=1): err= 0: pid=3678651: Wed Apr 17 10:31:29 2024 00:34:58.574 read: IOPS=451, BW=1807KiB/s (1850kB/s)(17.7MiB/10005msec) 00:34:58.574 slat (nsec): min=9219, max=95878, avg=35470.08, stdev=21940.83 00:34:58.574 clat (usec): min=19866, max=94952, avg=35145.31, stdev=4433.41 00:34:58.574 lat (usec): min=19876, max=94977, avg=35180.78, stdev=4435.15 00:34:58.574 clat percentiles (usec): 00:34:58.574 | 1.00th=[22676], 5.00th=[27395], 10.00th=[30540], 20.00th=[34866], 00:34:58.574 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.574 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[37487], 00:34:58.574 | 99.00th=[45876], 99.50th=[53216], 99.90th=[81265], 99.95th=[81265], 00:34:58.575 | 99.99th=[94897] 00:34:58.575 bw ( KiB/s): min= 1536, max= 1980, per=4.23%, avg=1807.16, stdev=90.07, samples=19 00:34:58.575 iops : min= 384, max= 495, avg=451.79, stdev=22.52, samples=19 00:34:58.575 lat (msec) : 20=0.09%, 50=99.38%, 100=0.53% 00:34:58.575 cpu : usr=98.69%, sys=0.73%, ctx=43, majf=0, minf=39 00:34:58.575 IO depths : 1=4.0%, 2=8.1%, 4=17.2%, 8=60.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=92.3%, 8=3.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678652: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.5MiB/10001msec) 00:34:58.575 slat (nsec): min=5894, max=95070, avg=28876.65, stdev=21430.57 00:34:58.575 clat (usec): min=19174, max=91110, avg=35461.93, stdev=4072.08 00:34:58.575 lat (usec): min=19186, max=91126, avg=35490.81, stdev=4072.99 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[22414], 5.00th=[31327], 10.00th=[34866], 20.00th=[35390], 00:34:58.575 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.575 | 99.00th=[49021], 99.50th=[56886], 99.90th=[77071], 99.95th=[77071], 00:34:58.575 | 99.99th=[90702] 00:34:58.575 bw ( KiB/s): min= 1539, max= 2107, per=4.19%, avg=1792.95, stdev=103.73, samples=19 00:34:58.575 iops : min= 384, max= 526, avg=448.16, stdev=25.91, samples=19 00:34:58.575 lat (msec) : 20=0.22%, 50=99.15%, 100=0.62% 00:34:58.575 cpu : usr=98.74%, sys=0.75%, ctx=34, majf=0, minf=40 00:34:58.575 IO depths : 1=5.4%, 2=11.2%, 4=23.7%, 8=52.5%, 16=7.2%, 
32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678653: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=444, BW=1778KiB/s (1821kB/s)(17.4MiB/10007msec) 00:34:58.575 slat (nsec): min=4331, max=95746, avg=46069.92, stdev=17524.74 00:34:58.575 clat (usec): min=13608, max=70108, avg=35602.39, stdev=2689.15 00:34:58.575 lat (usec): min=13633, max=70122, avg=35648.46, stdev=2687.61 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[33817], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.575 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.575 | 99.00th=[38011], 99.50th=[42206], 99.90th=[69731], 99.95th=[69731], 00:34:58.575 | 99.99th=[69731] 00:34:58.575 bw ( KiB/s): min= 1539, max= 1792, per=4.14%, avg=1769.84, stdev=62.90, samples=19 00:34:58.575 iops : min= 384, max= 448, avg=442.42, stdev=15.88, samples=19 00:34:58.575 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:34:58.575 cpu : usr=98.20%, sys=1.01%, ctx=248, majf=0, minf=33 00:34:58.575 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678654: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10012msec) 00:34:58.575 slat (usec): min=10, max=121, avg=59.37, stdev=20.38 00:34:58.575 clat (usec): min=25621, max=59383, avg=35485.96, stdev=1333.31 00:34:58.575 lat (usec): min=25637, max=59425, avg=35545.34, stdev=1332.18 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.575 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.575 | 99.00th=[39060], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:34:58.575 | 99.99th=[59507] 00:34:58.575 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1770.32, stdev=47.33, samples=19 00:34:58.575 iops : min= 416, max= 448, avg=442.58, stdev=11.83, samples=19 00:34:58.575 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.575 cpu : usr=99.03%, sys=0.51%, ctx=29, majf=0, minf=32 00:34:58.575 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678655: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=444, BW=1778KiB/s (1821kB/s)(17.4MiB/10006msec) 00:34:58.575 slat (nsec): min=4340, max=93126, avg=48240.76, stdev=17235.88 00:34:58.575 clat 
(usec): min=12610, max=81164, avg=35561.98, stdev=3301.41 00:34:58.575 lat (usec): min=12629, max=81177, avg=35610.22, stdev=3300.60 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[28443], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.575 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.575 | 99.00th=[39060], 99.50th=[41681], 99.90th=[81265], 99.95th=[81265], 00:34:58.575 | 99.99th=[81265] 00:34:58.575 bw ( KiB/s): min= 1539, max= 1795, per=4.12%, avg=1763.26, stdev=67.61, samples=19 00:34:58.575 iops : min= 384, max= 448, avg=440.74, stdev=17.02, samples=19 00:34:58.575 lat (msec) : 20=0.72%, 50=98.92%, 100=0.36% 00:34:58.575 cpu : usr=97.48%, sys=1.27%, ctx=90, majf=0, minf=36 00:34:58.575 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678656: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=444, BW=1779KiB/s (1822kB/s)(17.4MiB/10002msec) 00:34:58.575 slat (usec): min=9, max=107, avg=48.28, stdev=20.19 00:34:58.575 clat (usec): min=23511, max=49167, avg=35570.95, stdev=1313.21 00:34:58.575 lat (usec): min=23526, max=49186, avg=35619.24, stdev=1310.95 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.575 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.575 | 99.00th=[39060], 99.50th=[41681], 99.90th=[49021], 99.95th=[49021], 00:34:58.575 | 99.99th=[49021] 00:34:58.575 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1777.05, stdev=39.89, samples=19 00:34:58.575 iops : min= 416, max= 448, avg=444.26, stdev= 9.97, samples=19 00:34:58.575 lat (msec) : 50=100.00% 00:34:58.575 cpu : usr=98.85%, sys=0.76%, ctx=23, majf=0, minf=36 00:34:58.575 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678658: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10010msec) 00:34:58.575 slat (usec): min=9, max=120, avg=54.16, stdev=23.67 00:34:58.575 clat (usec): min=3267, max=48269, avg=35178.40, stdev=3523.67 00:34:58.575 lat (usec): min=3280, max=48353, avg=35232.56, stdev=3526.37 00:34:58.575 clat percentiles (usec): 00:34:58.575 | 1.00th=[ 8160], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.575 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.575 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.575 | 99.00th=[39584], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:34:58.575 | 99.99th=[48497] 00:34:58.575 bw ( KiB/s): min= 1660, max= 2048, per=4.20%, avg=1797.47, stdev=67.69, 
samples=19 00:34:58.575 iops : min= 415, max= 512, avg=449.37, stdev=16.92, samples=19 00:34:58.575 lat (msec) : 4=0.36%, 10=0.82%, 20=0.24%, 50=98.58% 00:34:58.575 cpu : usr=99.05%, sys=0.57%, ctx=18, majf=0, minf=32 00:34:58.575 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:58.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.575 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.575 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.575 filename1: (groupid=0, jobs=1): err= 0: pid=3678659: Wed Apr 17 10:31:29 2024 00:34:58.575 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10012msec) 00:34:58.575 slat (usec): min=11, max=118, avg=57.81, stdev=20.24 00:34:58.575 clat (usec): min=27109, max=52113, avg=35499.59, stdev=1270.69 00:34:58.576 lat (usec): min=27187, max=52134, avg=35557.41, stdev=1269.28 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.576 | 99.00th=[39060], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:34:58.576 | 99.99th=[52167] 00:34:58.576 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1770.32, stdev=47.33, samples=19 00:34:58.576 iops : min= 416, max= 448, avg=442.58, stdev=11.83, samples=19 00:34:58.576 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.576 cpu : usr=99.15%, sys=0.46%, ctx=19, majf=0, minf=34 00:34:58.576 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename1: (groupid=0, jobs=1): err= 0: pid=3678660: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10004msec) 00:34:58.576 slat (usec): min=9, max=108, avg=50.37, stdev=17.78 00:34:58.576 clat (usec): min=7488, max=48094, avg=35446.75, stdev=2267.81 00:34:58.576 lat (usec): min=7499, max=48152, avg=35497.12, stdev=2268.84 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[32900], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.576 | 99.00th=[39584], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:34:58.576 | 99.99th=[47973] 00:34:58.576 bw ( KiB/s): min= 1664, max= 1795, per=4.17%, avg=1783.53, stdev=29.03, samples=19 00:34:58.576 iops : min= 416, max= 448, avg=445.84, stdev= 7.24, samples=19 00:34:58.576 lat (msec) : 10=0.36%, 20=0.36%, 50=99.28% 00:34:58.576 cpu : usr=99.03%, sys=0.59%, ctx=25, majf=0, minf=43 00:34:58.576 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678661: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=445, BW=1782KiB/s (1825kB/s)(17.4MiB/10006msec) 00:34:58.576 slat (nsec): min=4454, max=93994, avg=21405.26, stdev=19157.40 00:34:58.576 clat (usec): min=7588, max=67378, avg=35759.46, stdev=4733.93 00:34:58.576 lat (usec): min=7595, max=67391, avg=35780.87, stdev=4735.75 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[22676], 5.00th=[28705], 10.00th=[30802], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:34:58.576 | 70.00th=[35914], 80.00th=[36439], 90.00th=[40109], 95.00th=[43254], 00:34:58.576 | 99.00th=[53740], 99.50th=[54789], 99.90th=[67634], 99.95th=[67634], 00:34:58.576 | 99.99th=[67634] 00:34:58.576 bw ( KiB/s): min= 1539, max= 1808, per=4.14%, avg=1769.16, stdev=59.95, samples=19 00:34:58.576 iops : min= 384, max= 452, avg=442.21, stdev=15.15, samples=19 00:34:58.576 lat (msec) : 10=0.09%, 20=0.85%, 50=97.26%, 100=1.79% 00:34:58.576 cpu : usr=98.95%, sys=0.61%, ctx=33, majf=0, minf=35 00:34:58.576 IO depths : 1=1.4%, 2=3.5%, 4=9.4%, 8=71.7%, 16=14.0%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=90.6%, 8=6.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678662: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10012msec) 00:34:58.576 slat (usec): min=9, max=120, avg=57.40, stdev=19.47 00:34:58.576 clat (usec): min=27221, max=52033, avg=35531.09, stdev=1267.83 00:34:58.576 lat (usec): min=27292, max=52063, avg=35588.49, stdev=1265.11 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.576 | 99.00th=[39060], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:34:58.576 | 99.99th=[52167] 00:34:58.576 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1770.32, stdev=47.33, samples=19 00:34:58.576 iops : min= 416, max= 448, avg=442.58, stdev=11.83, samples=19 00:34:58.576 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.576 cpu : usr=99.17%, sys=0.44%, ctx=17, majf=0, minf=28 00:34:58.576 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678663: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=443, BW=1772KiB/s (1815kB/s)(17.3MiB/10002msec) 00:34:58.576 slat (nsec): min=5892, max=89519, avg=43947.53, stdev=18112.97 00:34:58.576 clat (usec): min=21300, max=92629, avg=35744.77, stdev=3087.44 00:34:58.576 lat (usec): min=21317, max=92646, avg=35788.71, stdev=3084.88 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 
60.00th=[35914], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.576 | 99.00th=[41681], 99.50th=[49021], 99.90th=[79168], 99.95th=[79168], 00:34:58.576 | 99.99th=[92799] 00:34:58.576 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1770.11, stdev=64.10, samples=19 00:34:58.576 iops : min= 384, max= 448, avg=442.53, stdev=16.03, samples=19 00:34:58.576 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.576 cpu : usr=98.75%, sys=0.73%, ctx=77, majf=0, minf=33 00:34:58.576 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678664: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=444, BW=1779KiB/s (1822kB/s)(17.4MiB/10002msec) 00:34:58.576 slat (nsec): min=9272, max=89722, avg=43851.63, stdev=19336.45 00:34:58.576 clat (usec): min=23441, max=49223, avg=35627.85, stdev=1305.45 00:34:58.576 lat (usec): min=23459, max=49247, avg=35671.70, stdev=1303.47 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.576 | 99.00th=[39060], 99.50th=[41681], 99.90th=[49021], 99.95th=[49021], 00:34:58.576 | 99.99th=[49021] 00:34:58.576 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1777.05, stdev=39.89, samples=19 00:34:58.576 iops : min= 416, max= 448, avg=444.26, stdev= 9.97, samples=19 00:34:58.576 lat (msec) : 50=100.00% 00:34:58.576 cpu : usr=99.13%, sys=0.50%, ctx=20, majf=0, minf=37 00:34:58.576 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678665: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=444, BW=1778KiB/s (1820kB/s)(17.4MiB/10008msec) 00:34:58.576 slat (nsec): min=4327, max=89075, avg=45676.81, stdev=17562.51 00:34:58.576 clat (usec): min=13448, max=84382, avg=35583.68, stdev=2803.57 00:34:58.576 lat (usec): min=13458, max=84395, avg=35629.35, stdev=2802.14 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[33817], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.576 | 99.00th=[38011], 99.50th=[42206], 99.90th=[70779], 99.95th=[70779], 00:34:58.576 | 99.99th=[84411] 00:34:58.576 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1769.68, stdev=63.52, samples=19 00:34:58.576 iops : min= 384, max= 448, avg=442.42, stdev=15.88, samples=19 00:34:58.576 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:34:58.576 cpu : usr=97.33%, sys=1.40%, ctx=105, majf=0, minf=36 00:34:58.576 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.576 filename2: (groupid=0, jobs=1): err= 0: pid=3678666: Wed Apr 17 10:31:29 2024 00:34:58.576 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10005msec) 00:34:58.576 slat (nsec): min=6655, max=98595, avg=39011.86, stdev=23427.45 00:34:58.576 clat (usec): min=3507, max=48286, avg=35407.10, stdev=3183.17 00:34:58.576 lat (usec): min=3521, max=48295, avg=35446.11, stdev=3183.62 00:34:58.576 clat percentiles (usec): 00:34:58.576 | 1.00th=[13042], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.576 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:34:58.576 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.576 | 99.00th=[41681], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:34:58.576 | 99.99th=[48497] 00:34:58.576 bw ( KiB/s): min= 1664, max= 2048, per=4.19%, avg=1790.11, stdev=73.93, samples=19 00:34:58.576 iops : min= 416, max= 512, avg=447.53, stdev=18.48, samples=19 00:34:58.576 lat (msec) : 4=0.20%, 10=0.65%, 20=0.22%, 50=98.93% 00:34:58.576 cpu : usr=97.79%, sys=1.24%, ctx=60, majf=0, minf=64 00:34:58.576 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:58.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.576 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.577 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.577 filename2: (groupid=0, jobs=1): err= 0: pid=3678667: Wed Apr 17 10:31:29 2024 00:34:58.577 read: IOPS=444, BW=1777KiB/s (1820kB/s)(17.4MiB/10012msec) 00:34:58.577 slat (usec): min=10, max=109, avg=52.70, stdev=15.96 00:34:58.577 clat (usec): min=26383, max=67480, avg=35543.07, stdev=1411.25 00:34:58.577 lat (usec): min=26396, max=67501, avg=35595.78, stdev=1410.44 00:34:58.577 clat percentiles (usec): 00:34:58.577 | 1.00th=[34341], 5.00th=[34866], 10.00th=[34866], 20.00th=[34866], 00:34:58.577 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35390], 00:34:58.577 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:34:58.577 | 99.00th=[39060], 99.50th=[41157], 99.90th=[52167], 99.95th=[52167], 00:34:58.577 | 99.99th=[67634] 00:34:58.577 bw ( KiB/s): min= 1664, max= 1792, per=4.14%, avg=1770.32, stdev=47.33, samples=19 00:34:58.577 iops : min= 416, max= 448, avg=442.58, stdev=11.83, samples=19 00:34:58.577 lat (msec) : 50=99.64%, 100=0.36% 00:34:58.577 cpu : usr=97.40%, sys=1.43%, ctx=87, majf=0, minf=41 00:34:58.577 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:58.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.577 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.577 issued rwts: total=4448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.577 filename2: (groupid=0, jobs=1): err= 0: pid=3678668: Wed Apr 17 10:31:29 2024 00:34:58.577 read: IOPS=441, BW=1766KiB/s (1809kB/s)(17.3MiB/10006msec) 00:34:58.577 slat (usec): min=4, max=105, avg=46.87, stdev=18.46 00:34:58.577 clat (usec): min=7702, max=81125, 
avg=35818.37, stdev=3803.12 00:34:58.577 lat (usec): min=7711, max=81139, avg=35865.24, stdev=3802.53 00:34:58.577 clat percentiles (usec): 00:34:58.577 | 1.00th=[28181], 5.00th=[34866], 10.00th=[34866], 20.00th=[35390], 00:34:58.577 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35390], 60.00th=[35914], 00:34:58.577 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:34:58.577 | 99.00th=[46400], 99.50th=[52167], 99.90th=[81265], 99.95th=[81265], 00:34:58.577 | 99.99th=[81265] 00:34:58.577 bw ( KiB/s): min= 1539, max= 1795, per=4.12%, avg=1761.58, stdev=67.36, samples=19 00:34:58.577 iops : min= 384, max= 448, avg=440.32, stdev=16.96, samples=19 00:34:58.577 lat (msec) : 10=0.09%, 20=0.50%, 50=98.46%, 100=0.95% 00:34:58.577 cpu : usr=98.76%, sys=0.82%, ctx=58, majf=0, minf=33 00:34:58.577 IO depths : 1=5.7%, 2=11.6%, 4=23.7%, 8=52.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:58.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.577 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.577 issued rwts: total=4418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:58.577 00:34:58.577 Run status group 0 (all jobs): 00:34:58.577 READ: bw=41.8MiB/s (43.8MB/s), 1766KiB/s-1807KiB/s (1809kB/s-1850kB/s), io=418MiB (438MB), run=10001-10012msec 00:34:58.577 10:31:29 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:58.577 10:31:29 -- target/dif.sh@43 -- # local sub 00:34:58.577 10:31:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.577 10:31:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.577 10:31:29 -- target/dif.sh@36 -- # local sub_id=0 00:34:58.577 10:31:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.577 10:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.577 10:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:29 -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.577 10:31:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:58.577 10:31:29 -- target/dif.sh@36 -- # local sub_id=1 00:34:58.577 10:31:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.577 10:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:58.577 10:31:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.577 10:31:30 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:58.577 10:31:30 -- target/dif.sh@36 -- # local sub_id=2 00:34:58.577 10:31:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # numjobs=2 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # iodepth=8 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # runtime=5 00:34:58.577 10:31:30 -- target/dif.sh@115 -- # files=1 00:34:58.577 10:31:30 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:58.577 10:31:30 -- target/dif.sh@28 -- # local sub 00:34:58.577 10:31:30 -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.577 10:31:30 -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.577 10:31:30 -- target/dif.sh@18 -- # local sub_id=0 00:34:58.577 10:31:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 bdev_null0 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 [2024-04-17 10:31:30.054311] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.577 10:31:30 -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.577 10:31:30 -- target/dif.sh@18 -- # local sub_id=1 00:34:58.577 10:31:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 bdev_null1 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.577 10:31:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:58.577 10:31:30 -- common/autotest_common.sh@10 -- # set +x 00:34:58.577 10:31:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:58.577 10:31:30 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:58.577 10:31:30 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:58.577 10:31:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:58.577 10:31:30 -- nvmf/common.sh@520 -- # config=() 00:34:58.577 10:31:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.577 10:31:30 -- nvmf/common.sh@520 -- # local subsystem config 00:34:58.577 10:31:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:58.577 10:31:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.577 10:31:30 -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.577 10:31:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:58.577 { 00:34:58.577 "params": { 00:34:58.577 "name": "Nvme$subsystem", 00:34:58.577 "trtype": "$TEST_TRANSPORT", 00:34:58.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.577 "adrfam": "ipv4", 00:34:58.577 "trsvcid": "$NVMF_PORT", 00:34:58.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.577 "hdgst": ${hdgst:-false}, 00:34:58.577 "ddgst": ${ddgst:-false} 00:34:58.577 }, 00:34:58.577 "method": "bdev_nvme_attach_controller" 00:34:58.577 } 00:34:58.577 EOF 00:34:58.577 )") 00:34:58.577 10:31:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:58.577 10:31:30 -- target/dif.sh@54 -- # local file 00:34:58.577 10:31:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.577 10:31:30 -- target/dif.sh@56 -- # cat 00:34:58.577 10:31:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:58.578 10:31:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.578 10:31:30 -- common/autotest_common.sh@1320 -- # shift 00:34:58.578 10:31:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:58.578 10:31:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.578 10:31:30 -- nvmf/common.sh@542 -- # cat 00:34:58.578 10:31:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.578 10:31:30 -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:58.578 10:31:30 -- target/dif.sh@73 -- # cat 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:58.578 10:31:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:58.578 10:31:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:58.578 { 00:34:58.578 "params": { 
00:34:58.578 "name": "Nvme$subsystem", 00:34:58.578 "trtype": "$TEST_TRANSPORT", 00:34:58.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.578 "adrfam": "ipv4", 00:34:58.578 "trsvcid": "$NVMF_PORT", 00:34:58.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.578 "hdgst": ${hdgst:-false}, 00:34:58.578 "ddgst": ${ddgst:-false} 00:34:58.578 }, 00:34:58.578 "method": "bdev_nvme_attach_controller" 00:34:58.578 } 00:34:58.578 EOF 00:34:58.578 )") 00:34:58.578 10:31:30 -- target/dif.sh@72 -- # (( file++ )) 00:34:58.578 10:31:30 -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.578 10:31:30 -- nvmf/common.sh@542 -- # cat 00:34:58.578 10:31:30 -- nvmf/common.sh@544 -- # jq . 00:34:58.578 10:31:30 -- nvmf/common.sh@545 -- # IFS=, 00:34:58.578 10:31:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:58.578 "params": { 00:34:58.578 "name": "Nvme0", 00:34:58.578 "trtype": "tcp", 00:34:58.578 "traddr": "10.0.0.2", 00:34:58.578 "adrfam": "ipv4", 00:34:58.578 "trsvcid": "4420", 00:34:58.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.578 "hdgst": false, 00:34:58.578 "ddgst": false 00:34:58.578 }, 00:34:58.578 "method": "bdev_nvme_attach_controller" 00:34:58.578 },{ 00:34:58.578 "params": { 00:34:58.578 "name": "Nvme1", 00:34:58.578 "trtype": "tcp", 00:34:58.578 "traddr": "10.0.0.2", 00:34:58.578 "adrfam": "ipv4", 00:34:58.578 "trsvcid": "4420", 00:34:58.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.578 "hdgst": false, 00:34:58.578 "ddgst": false 00:34:58.578 }, 00:34:58.578 "method": "bdev_nvme_attach_controller" 00:34:58.578 }' 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:58.578 10:31:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:58.578 10:31:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:58.578 10:31:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:58.578 10:31:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:58.578 10:31:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.578 10:31:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.578 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:58.578 ... 00:34:58.578 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:58.578 ... 00:34:58.578 fio-3.35 00:34:58.578 Starting 4 threads 00:34:58.578 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.578 [2024-04-17 10:31:31.243608] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:58.578 [2024-04-17 10:31:31.243682] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:03.852 00:35:03.852 filename0: (groupid=0, jobs=1): err= 0: pid=3680784: Wed Apr 17 10:31:36 2024 00:35:03.852 read: IOPS=1982, BW=15.5MiB/s (16.2MB/s)(77.5MiB/5003msec) 00:35:03.852 slat (nsec): min=5483, max=54997, avg=8234.47, stdev=3215.92 00:35:03.852 clat (usec): min=1272, max=7894, avg=4008.86, stdev=769.95 00:35:03.852 lat (usec): min=1278, max=7906, avg=4017.09, stdev=769.88 00:35:03.852 clat percentiles (usec): 00:35:03.852 | 1.00th=[ 2606], 5.00th=[ 2966], 10.00th=[ 3195], 20.00th=[ 3392], 00:35:03.852 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3949], 60.00th=[ 4080], 00:35:03.852 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4817], 95.00th=[ 5538], 00:35:03.852 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[ 7635], 00:35:03.852 | 99.99th=[ 7898] 00:35:03.852 bw ( KiB/s): min=14464, max=16768, per=27.11%, avg=15863.90, stdev=735.79, samples=10 00:35:03.852 iops : min= 1808, max= 2096, avg=1982.90, stdev=91.92, samples=10 00:35:03.852 lat (msec) : 2=0.09%, 4=53.96%, 10=45.95% 00:35:03.852 cpu : usr=97.36%, sys=2.26%, ctx=16, majf=0, minf=9 00:35:03.852 IO depths : 1=0.1%, 2=8.2%, 4=63.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 issued rwts: total=9918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.852 filename0: (groupid=0, jobs=1): err= 0: pid=3680785: Wed Apr 17 10:31:36 2024 00:35:03.852 read: IOPS=1711, BW=13.4MiB/s (14.0MB/s)(66.9MiB/5001msec) 00:35:03.852 slat (nsec): min=5453, max=32370, avg=8225.87, stdev=3299.80 00:35:03.852 clat (usec): min=1524, max=9755, avg=4650.56, stdev=819.58 00:35:03.852 lat (usec): min=1530, max=9778, avg=4658.79, stdev=819.20 00:35:03.852 clat percentiles (usec): 00:35:03.852 | 1.00th=[ 3228], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4113], 00:35:03.852 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:35:03.852 | 70.00th=[ 4752], 80.00th=[ 5145], 90.00th=[ 5866], 95.00th=[ 6587], 00:35:03.852 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 8029], 00:35:03.852 | 99.99th=[ 9765] 00:35:03.852 bw ( KiB/s): min=13018, max=14208, per=23.40%, avg=13690.00, stdev=364.21, samples=9 00:35:03.852 iops : min= 1627, max= 1776, avg=1711.22, stdev=45.58, samples=9 00:35:03.852 lat (msec) : 2=0.02%, 4=12.76%, 10=87.22% 00:35:03.852 cpu : usr=97.52%, sys=2.16%, ctx=11, majf=0, minf=9 00:35:03.852 IO depths : 1=0.1%, 2=0.8%, 4=69.1%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 issued rwts: total=8561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.852 filename1: (groupid=0, jobs=1): err= 0: pid=3680786: Wed Apr 17 10:31:36 2024 00:35:03.852 read: IOPS=1786, BW=14.0MiB/s (14.6MB/s)(69.8MiB/5002msec) 00:35:03.852 slat (nsec): min=5466, max=40626, avg=7902.44, stdev=3017.82 00:35:03.852 clat (usec): min=1961, max=10400, avg=4453.25, stdev=752.34 00:35:03.852 lat (usec): min=1970, max=10422, avg=4461.15, stdev=752.17 00:35:03.852 clat percentiles (usec): 00:35:03.852 | 1.00th=[ 2900], 
5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3949], 00:35:03.852 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:03.852 | 70.00th=[ 4555], 80.00th=[ 4883], 90.00th=[ 5407], 95.00th=[ 5997], 00:35:03.852 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 8029], 00:35:03.852 | 99.99th=[10421] 00:35:03.852 bw ( KiB/s): min=13824, max=14848, per=24.45%, avg=14307.56, stdev=267.59, samples=9 00:35:03.852 iops : min= 1728, max= 1856, avg=1788.44, stdev=33.45, samples=9 00:35:03.852 lat (msec) : 2=0.01%, 4=23.14%, 10=76.82%, 20=0.03% 00:35:03.852 cpu : usr=97.94%, sys=1.68%, ctx=8, majf=0, minf=9 00:35:03.852 IO depths : 1=0.1%, 2=2.7%, 4=69.0%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 issued rwts: total=8938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.852 filename1: (groupid=0, jobs=1): err= 0: pid=3680787: Wed Apr 17 10:31:36 2024 00:35:03.852 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5001msec) 00:35:03.852 slat (nsec): min=5468, max=57096, avg=8042.28, stdev=3100.15 00:35:03.852 clat (usec): min=1679, max=7518, avg=4336.07, stdev=709.74 00:35:03.852 lat (usec): min=1690, max=7524, avg=4344.12, stdev=709.59 00:35:03.852 clat percentiles (usec): 00:35:03.852 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3851], 00:35:03.852 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:35:03.852 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5211], 95.00th=[ 5669], 00:35:03.852 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7439], 00:35:03.852 | 99.99th=[ 7504] 00:35:03.852 bw ( KiB/s): min=14128, max=15232, per=25.09%, avg=14680.56, stdev=385.51, samples=9 00:35:03.852 iops : min= 1766, max= 1904, avg=1835.00, stdev=48.21, samples=9 00:35:03.852 lat (msec) : 2=0.03%, 4=27.29%, 10=72.68% 00:35:03.852 cpu : usr=97.30%, sys=2.34%, ctx=7, majf=0, minf=9 00:35:03.852 IO depths : 1=0.1%, 2=2.0%, 4=70.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:03.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:03.852 issued rwts: total=9177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:03.852 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:03.852 00:35:03.852 Run status group 0 (all jobs): 00:35:03.852 READ: bw=57.1MiB/s (59.9MB/s), 13.4MiB/s-15.5MiB/s (14.0MB/s-16.2MB/s), io=286MiB (300MB), run=5001-5003msec 00:35:03.852 10:31:36 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:03.852 10:31:36 -- target/dif.sh@43 -- # local sub 00:35:03.852 10:31:36 -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.852 10:31:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:03.852 10:31:36 -- target/dif.sh@36 -- # local sub_id=0 00:35:03.852 10:31:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:03.852 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.852 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.852 10:31:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:03.852 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 
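rpc_cmd in the destroy_subsystems trace that follows the run status is the harness wrapper around the target's JSON-RPC socket. Issued by hand against the default /var/tmp/spdk.sock, the same teardown would look roughly like this (rpc.py form assumed in place of the wrapper, argument values taken from the trace):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_null_delete bdev_null1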
00:35:03.852 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.852 10:31:36 -- target/dif.sh@45 -- # for sub in "$@" 00:35:03.852 10:31:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:03.852 10:31:36 -- target/dif.sh@36 -- # local sub_id=1 00:35:03.852 10:31:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:03.852 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.852 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.852 10:31:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:03.852 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.852 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.852 00:35:03.852 real 0m24.605s 00:35:03.852 user 5m7.004s 00:35:03.852 sys 0m4.001s 00:35:03.852 10:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.852 ************************************ 00:35:03.852 END TEST fio_dif_rand_params 00:35:03.852 ************************************ 00:35:03.852 10:31:36 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:03.852 10:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:03.852 10:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:03.852 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.852 ************************************ 00:35:03.852 START TEST fio_dif_digest 00:35:03.852 ************************************ 00:35:03.852 10:31:36 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:03.852 10:31:36 -- target/dif.sh@123 -- # local NULL_DIF 00:35:03.852 10:31:36 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:03.852 10:31:36 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:03.852 10:31:36 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:03.852 10:31:36 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:03.852 10:31:36 -- target/dif.sh@127 -- # numjobs=3 00:35:03.853 10:31:36 -- target/dif.sh@127 -- # iodepth=3 00:35:03.853 10:31:36 -- target/dif.sh@127 -- # runtime=10 00:35:03.853 10:31:36 -- target/dif.sh@128 -- # hdgst=true 00:35:03.853 10:31:36 -- target/dif.sh@128 -- # ddgst=true 00:35:03.853 10:31:36 -- target/dif.sh@130 -- # create_subsystems 0 00:35:03.853 10:31:36 -- target/dif.sh@28 -- # local sub 00:35:03.853 10:31:36 -- target/dif.sh@30 -- # for sub in "$@" 00:35:03.853 10:31:36 -- target/dif.sh@31 -- # create_subsystem 0 00:35:03.853 10:31:36 -- target/dif.sh@18 -- # local sub_id=0 00:35:03.853 10:31:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:03.853 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.853 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.853 bdev_null0 00:35:03.853 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.853 10:31:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.853 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.853 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.853 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.853 10:31:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:35:03.853 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.853 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.853 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.853 10:31:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.853 10:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:03.853 10:31:36 -- common/autotest_common.sh@10 -- # set +x 00:35:03.853 [2024-04-17 10:31:36.706587] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.853 10:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:03.853 10:31:36 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:03.853 10:31:36 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:03.853 10:31:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.853 10:31:36 -- nvmf/common.sh@520 -- # config=() 00:35:03.853 10:31:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.853 10:31:36 -- nvmf/common.sh@520 -- # local subsystem config 00:35:03.853 10:31:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:03.853 10:31:36 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.853 10:31:36 -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.853 10:31:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:03.853 { 00:35:03.853 "params": { 00:35:03.853 "name": "Nvme$subsystem", 00:35:03.853 "trtype": "$TEST_TRANSPORT", 00:35:03.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.853 "adrfam": "ipv4", 00:35:03.853 "trsvcid": "$NVMF_PORT", 00:35:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.853 "hdgst": ${hdgst:-false}, 00:35:03.853 "ddgst": ${ddgst:-false} 00:35:03.853 }, 00:35:03.853 "method": "bdev_nvme_attach_controller" 00:35:03.853 } 00:35:03.853 EOF 00:35:03.853 )") 00:35:03.853 10:31:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:03.853 10:31:36 -- target/dif.sh@54 -- # local file 00:35:03.853 10:31:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.853 10:31:36 -- target/dif.sh@56 -- # cat 00:35:03.853 10:31:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:03.853 10:31:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.853 10:31:36 -- common/autotest_common.sh@1320 -- # shift 00:35:03.853 10:31:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:03.853 10:31:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.853 10:31:36 -- nvmf/common.sh@542 -- # cat 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.853 10:31:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:03.853 10:31:36 -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:03.853 10:31:36 -- nvmf/common.sh@544 -- # jq . 
00:35:03.853 10:31:36 -- nvmf/common.sh@545 -- # IFS=, 00:35:03.853 10:31:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:03.853 "params": { 00:35:03.853 "name": "Nvme0", 00:35:03.853 "trtype": "tcp", 00:35:03.853 "traddr": "10.0.0.2", 00:35:03.853 "adrfam": "ipv4", 00:35:03.853 "trsvcid": "4420", 00:35:03.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.853 "hdgst": true, 00:35:03.853 "ddgst": true 00:35:03.853 }, 00:35:03.853 "method": "bdev_nvme_attach_controller" 00:35:03.853 }' 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:03.853 10:31:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:03.853 10:31:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:03.853 10:31:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:03.853 10:31:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:03.853 10:31:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.853 10:31:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.853 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:03.853 ... 00:35:03.853 fio-3.35 00:35:03.853 Starting 3 threads 00:35:03.853 EAL: No free 2048 kB hugepages reported on node 1 00:35:04.420 [2024-04-17 10:31:37.567013] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
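"hdgst": true and "ddgst": true in the generated config enable NVMe/TCP header and data digests on the fio-side bdev controller. A cross-check from a kernel initiator could presumably connect with digests enabled as well; the nvme-cli flags below are an assumption from memory of the tool, not something exercised in this log:

# assumed nvme-cli invocation; -g requests header digest, -G data digest on TCP
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -g -G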
00:35:04.420 [2024-04-17 10:31:37.567065] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:16.623 00:35:16.623 filename0: (groupid=0, jobs=1): err= 0: pid=3682026: Wed Apr 17 10:31:47 2024 00:35:16.623 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(237MiB/10046msec) 00:35:16.623 slat (nsec): min=5479, max=51073, avg=16119.74, stdev=5026.02 00:35:16.623 clat (usec): min=8966, max=59401, avg=15857.04, stdev=3372.15 00:35:16.623 lat (usec): min=8977, max=59412, avg=15873.16, stdev=3372.14 00:35:16.623 clat percentiles (usec): 00:35:16.623 | 1.00th=[10683], 5.00th=[13435], 10.00th=[14222], 20.00th=[14746], 00:35:16.623 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:35:16.623 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[17957], 00:35:16.623 | 99.00th=[19530], 99.50th=[55313], 99.90th=[58983], 99.95th=[59507], 00:35:16.623 | 99.99th=[59507] 00:35:16.623 bw ( KiB/s): min=19968, max=27136, per=33.14%, avg=24217.60, stdev=1378.35, samples=20 00:35:16.623 iops : min= 156, max= 212, avg=189.20, stdev=10.77, samples=20 00:35:16.623 lat (msec) : 10=0.21%, 20=99.00%, 50=0.26%, 100=0.53% 00:35:16.623 cpu : usr=95.61%, sys=4.04%, ctx=29, majf=0, minf=175 00:35:16.623 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.623 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.623 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:16.623 filename0: (groupid=0, jobs=1): err= 0: pid=3682027: Wed Apr 17 10:31:47 2024 00:35:16.623 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(246MiB/10050msec) 00:35:16.623 slat (nsec): min=2792, max=52076, avg=12817.31, stdev=5607.22 00:35:16.623 clat (usec): min=8673, max=59019, avg=15282.00, stdev=3413.14 00:35:16.623 lat (usec): min=8680, max=59030, avg=15294.82, stdev=3413.06 00:35:16.623 clat percentiles (usec): 00:35:16.623 | 1.00th=[10028], 5.00th=[12780], 10.00th=[13566], 20.00th=[14091], 00:35:16.623 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:35:16.624 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:35:16.624 | 99.00th=[18744], 99.50th=[51643], 99.90th=[58983], 99.95th=[58983], 00:35:16.624 | 99.99th=[58983] 00:35:16.624 bw ( KiB/s): min=22784, max=26880, per=34.43%, avg=25164.80, stdev=1041.04, samples=20 00:35:16.624 iops : min= 178, max= 210, avg=196.60, stdev= 8.13, samples=20 00:35:16.624 lat (msec) : 10=1.02%, 20=98.27%, 50=0.20%, 100=0.51% 00:35:16.624 cpu : usr=96.03%, sys=3.65%, ctx=19, majf=0, minf=171 00:35:16.624 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.624 issued rwts: total=1968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.624 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:16.624 filename0: (groupid=0, jobs=1): err= 0: pid=3682028: Wed Apr 17 10:31:47 2024 00:35:16.624 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(235MiB/10045msec) 00:35:16.624 slat (nsec): min=9448, max=52108, avg=16484.35, stdev=4997.60 00:35:16.624 clat (usec): min=9200, max=59183, avg=15990.42, stdev=3287.90 00:35:16.624 lat (usec): min=9215, max=59198, avg=16006.91, stdev=3288.04 00:35:16.624 clat percentiles 
(usec): 00:35:16.624 | 1.00th=[10421], 5.00th=[13304], 10.00th=[14222], 20.00th=[14877], 00:35:16.624 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:35:16.624 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:35:16.624 | 99.00th=[19268], 99.50th=[49021], 99.90th=[57410], 99.95th=[58983], 00:35:16.624 | 99.99th=[58983] 00:35:16.624 bw ( KiB/s): min=20992, max=26624, per=32.86%, avg=24012.80, stdev=1239.20, samples=20 00:35:16.624 iops : min= 164, max= 208, avg=187.60, stdev= 9.68, samples=20 00:35:16.624 lat (msec) : 10=0.48%, 20=98.83%, 50=0.21%, 100=0.48% 00:35:16.624 cpu : usr=95.43%, sys=4.23%, ctx=19, majf=0, minf=97 00:35:16.624 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.624 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.624 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:16.624 00:35:16.624 Run status group 0 (all jobs): 00:35:16.624 READ: bw=71.4MiB/s (74.8MB/s), 23.4MiB/s-24.5MiB/s (24.5MB/s-25.7MB/s), io=717MiB (752MB), run=10045-10050msec 00:35:16.624 10:31:47 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:16.624 10:31:47 -- target/dif.sh@43 -- # local sub 00:35:16.624 10:31:47 -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.624 10:31:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:16.624 10:31:47 -- target/dif.sh@36 -- # local sub_id=0 00:35:16.624 10:31:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:16.624 10:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:16.624 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:35:16.624 10:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:16.624 10:31:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:16.624 10:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:16.624 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:35:16.624 10:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:16.624 00:35:16.624 real 0m11.294s 00:35:16.624 user 0m40.580s 00:35:16.624 sys 0m1.528s 00:35:16.624 10:31:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:16.624 10:31:47 -- common/autotest_common.sh@10 -- # set +x 00:35:16.624 ************************************ 00:35:16.624 END TEST fio_dif_digest 00:35:16.624 ************************************ 00:35:16.624 10:31:47 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:16.624 10:31:47 -- target/dif.sh@147 -- # nvmftestfini 00:35:16.624 10:31:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:16.624 10:31:48 -- nvmf/common.sh@116 -- # sync 00:35:16.624 10:31:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:16.624 10:31:48 -- nvmf/common.sh@119 -- # set +e 00:35:16.624 10:31:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:16.624 10:31:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:16.624 rmmod nvme_tcp 00:35:16.624 rmmod nvme_fabrics 00:35:16.624 rmmod nvme_keyring 00:35:16.624 10:31:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:16.624 10:31:48 -- nvmf/common.sh@123 -- # set -e 00:35:16.624 10:31:48 -- nvmf/common.sh@124 -- # return 0 00:35:16.624 10:31:48 -- nvmf/common.sh@477 -- # '[' -n 3672590 ']' 00:35:16.624 10:31:48 -- nvmf/common.sh@478 -- # killprocess 3672590 00:35:16.624 10:31:48 -- 
common/autotest_common.sh@926 -- # '[' -z 3672590 ']' 00:35:16.624 10:31:48 -- common/autotest_common.sh@930 -- # kill -0 3672590 00:35:16.624 10:31:48 -- common/autotest_common.sh@931 -- # uname 00:35:16.624 10:31:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:16.624 10:31:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3672590 00:35:16.624 10:31:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:16.624 10:31:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:16.624 10:31:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3672590' 00:35:16.624 killing process with pid 3672590 00:35:16.624 10:31:48 -- common/autotest_common.sh@945 -- # kill 3672590 00:35:16.624 10:31:48 -- common/autotest_common.sh@950 -- # wait 3672590 00:35:16.624 10:31:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:16.624 10:31:48 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:17.561 Waiting for block devices as requested 00:35:17.561 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:35:17.820 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:17.820 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:18.079 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:18.079 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:18.079 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:18.079 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:18.337 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:18.337 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:18.337 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:18.596 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:18.596 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:18.596 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:18.855 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:18.855 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:18.855 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:18.855 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:19.114 10:31:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:19.114 10:31:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:19.114 10:31:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:19.114 10:31:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:19.114 10:31:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.114 10:31:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.114 10:31:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.019 10:31:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:21.019 00:35:21.019 real 1m14.667s 00:35:21.019 user 7m41.983s 00:35:21.019 sys 0m17.810s 00:35:21.019 10:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:21.019 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:35:21.019 ************************************ 00:35:21.019 END TEST nvmf_dif 00:35:21.019 ************************************ 00:35:21.019 10:31:54 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:21.019 10:31:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:21.019 10:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:21.019 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:35:21.019 ************************************ 00:35:21.019 START TEST nvmf_abort_qd_sizes 
00:35:21.019 ************************************ 00:35:21.019 10:31:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:21.277 * Looking for test storage... 00:35:21.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.277 10:31:54 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.277 10:31:54 -- nvmf/common.sh@7 -- # uname -s 00:35:21.277 10:31:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.277 10:31:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.277 10:31:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.277 10:31:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.277 10:31:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.277 10:31:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.278 10:31:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.278 10:31:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.278 10:31:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.278 10:31:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.278 10:31:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:35:21.278 10:31:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:35:21.278 10:31:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.278 10:31:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.278 10:31:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.278 10:31:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.278 10:31:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.278 10:31:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.278 10:31:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.278 10:31:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.278 10:31:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.278 10:31:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.278 10:31:54 -- paths/export.sh@5 -- # export PATH 00:35:21.278 10:31:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.278 10:31:54 -- nvmf/common.sh@46 -- # : 0 00:35:21.278 10:31:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:21.278 10:31:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:21.278 10:31:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:21.278 10:31:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.278 10:31:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.278 10:31:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:21.278 10:31:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:21.278 10:31:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:21.278 10:31:54 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:21.278 10:31:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:21.278 10:31:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.278 10:31:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:21.278 10:31:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:21.278 10:31:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:21.278 10:31:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.278 10:31:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.278 10:31:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.278 10:31:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:35:21.278 10:31:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:21.278 10:31:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:21.278 10:31:54 -- common/autotest_common.sh@10 -- # set +x 00:35:26.552 10:31:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:26.552 10:31:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:26.552 10:31:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:26.552 10:31:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:26.552 10:31:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:26.552 10:31:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:26.552 10:31:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:26.552 10:31:59 -- nvmf/common.sh@294 -- # net_devs=() 00:35:26.552 10:31:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:26.552 10:31:59 -- nvmf/common.sh@295 -- # e810=() 00:35:26.552 10:31:59 -- nvmf/common.sh@295 -- # local -ga e810 00:35:26.552 10:31:59 -- nvmf/common.sh@296 -- # x722=() 00:35:26.552 10:31:59 -- nvmf/common.sh@296 -- # local -ga x722 00:35:26.552 10:31:59 -- nvmf/common.sh@297 -- # mlx=() 00:35:26.552 10:31:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:26.552 10:31:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.552 10:31:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:26.552 10:31:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:35:26.552 10:31:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:26.552 10:31:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:26.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:26.552 10:31:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:26.552 10:31:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:26.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:26.552 10:31:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:26.552 10:31:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.552 10:31:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.552 10:31:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:26.552 Found net devices under 0000:af:00.0: cvl_0_0 00:35:26.552 10:31:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.552 10:31:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:26.552 10:31:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.552 10:31:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.552 10:31:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:26.552 Found net devices under 0000:af:00.1: cvl_0_1 00:35:26.552 10:31:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.552 10:31:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:26.552 10:31:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:26.552 10:31:59 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:26.552 10:31:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:26.552 10:31:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.552 10:31:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.552 10:31:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.552 10:31:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:26.552 10:31:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.552 10:31:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.552 10:31:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:26.552 10:31:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.552 10:31:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.552 10:31:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:26.552 10:31:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:26.552 10:31:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.552 10:31:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.552 10:31:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.552 10:31:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.552 10:31:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:26.552 10:31:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.811 10:31:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.811 10:31:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.811 10:31:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:26.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:35:26.811 00:35:26.811 --- 10.0.0.2 ping statistics --- 00:35:26.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.811 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:26.811 10:31:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
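Condensed, the namespace plumbing nvmf_tcp_init just performed (interface names are the two CVL ports found above; 10.0.0.1 stays in the root namespace as the initiator address, 10.0.0.2 lives in cvl_0_0_ns_spdk for the target):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator-side reachability check, ~0.28 ms above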
00:35:26.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:35:26.811 00:35:26.811 --- 10.0.0.1 ping statistics --- 00:35:26.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.811 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:35:26.811 10:31:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.811 10:31:59 -- nvmf/common.sh@410 -- # return 0 00:35:26.811 10:31:59 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:26.811 10:31:59 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:29.344 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:29.344 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:30.281 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:35:30.281 10:32:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.281 10:32:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:30.281 10:32:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:30.281 10:32:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.281 10:32:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:30.281 10:32:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:30.281 10:32:03 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:30.281 10:32:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:30.281 10:32:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:30.281 10:32:03 -- common/autotest_common.sh@10 -- # set +x 00:35:30.281 10:32:03 -- nvmf/common.sh@469 -- # nvmfpid=3690355 00:35:30.281 10:32:03 -- nvmf/common.sh@470 -- # waitforlisten 3690355 00:35:30.281 10:32:03 -- common/autotest_common.sh@819 -- # '[' -z 3690355 ']' 00:35:30.281 10:32:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.281 10:32:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:30.281 10:32:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.281 10:32:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:30.281 10:32:03 -- common/autotest_common.sh@10 -- # set +x 00:35:30.281 10:32:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:30.281 [2024-04-17 10:32:03.457001] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
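The nvmf_tcp_init and nvmfappstart steps traced above boil down to a short command sequence. The sketch below restates it outside the harness, reusing the interface names (cvl_0_0, cvl_0_1), addresses, and paths from this particular run, so every value here is host-specific rather than a general recipe:

# Loopback topology: the target port lives in its own network namespace,
# the initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace

# The target application is then started inside the namespace (path from this run):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf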
00:35:30.281 [2024-04-17 10:32:03.457056] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:30.281 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.281 [2024-04-17 10:32:03.541404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:30.540 [2024-04-17 10:32:03.633942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:30.540 [2024-04-17 10:32:03.634082] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:30.540 [2024-04-17 10:32:03.634093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:30.540 [2024-04-17 10:32:03.634103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:30.540 [2024-04-17 10:32:03.634153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.540 [2024-04-17 10:32:03.634253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:30.540 [2024-04-17 10:32:03.634343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.540 [2024-04-17 10:32:03.634343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:31.111 10:32:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:31.111 10:32:04 -- common/autotest_common.sh@852 -- # return 0 00:35:31.111 10:32:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:31.111 10:32:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:31.111 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:35:31.111 10:32:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:31.111 10:32:04 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:31.111 10:32:04 -- scripts/common.sh@312 -- # local nvmes 00:35:31.111 10:32:04 -- scripts/common.sh@314 -- # [[ -n 0000:86:00.0 ]] 00:35:31.111 10:32:04 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:31.111 10:32:04 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:31.111 10:32:04 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:35:31.111 10:32:04 -- scripts/common.sh@322 -- # uname -s 00:35:31.111 10:32:04 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:31.111 10:32:04 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:31.111 10:32:04 -- scripts/common.sh@327 -- # (( 1 )) 00:35:31.111 10:32:04 -- scripts/common.sh@328 -- # printf '%s\n' 0000:86:00.0 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:86:00.0 00:35:31.111 10:32:04 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:31.111 10:32:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:31.111 10:32:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:31.112 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:35:31.112 ************************************ 00:35:31.112 START TEST 
spdk_target_abort 00:35:31.112 ************************************ 00:35:31.112 10:32:04 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:31.372 10:32:04 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:31.372 10:32:04 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:31.372 10:32:04 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:35:31.372 10:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:31.372 10:32:04 -- common/autotest_common.sh@10 -- # set +x 00:35:34.660 spdk_targetn1 00:35:34.660 10:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:34.660 10:32:07 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:34.660 10:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:34.660 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:35:34.660 [2024-04-17 10:32:07.302732] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.660 10:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:34.660 10:32:07 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:34.661 10:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:34.661 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:35:34.661 10:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:34.661 10:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:34.661 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:35:34.661 10:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:34.661 10:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:34.661 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:35:34.661 [2024-04-17 10:32:07.339026] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.661 10:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:34.661 10:32:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:34.661 EAL: No free 2048 kB hugepages reported on node 1 00:35:37.195 Initializing NVMe Controllers 00:35:37.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:37.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:37.195 Initialization complete. Launching workers. 00:35:37.195 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 14793, failed: 0 00:35:37.195 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1400, failed to submit 13393 00:35:37.195 success 858, unsuccess 542, failed 0 00:35:37.195 10:32:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:37.195 10:32:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:37.195 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.449 Initializing NVMe Controllers 00:35:41.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:41.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:41.449 Initialization complete. Launching workers. 00:35:41.449 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8515, failed: 0 00:35:41.449 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1228, failed to submit 7287 00:35:41.449 success 338, unsuccess 890, failed 0 00:35:41.449 10:32:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:41.449 10:32:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:41.449 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.981 Initializing NVMe Controllers 00:35:43.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:43.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:43.981 Initialization complete. Launching workers. 
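The spdk_target_abort setup above is driven through the harness's rpc_cmd wrapper; assuming rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock socket (as the waitforlisten message suggests), the same target can be stood up roughly as sketched below, with the NQN, serial, PCI address, and flags copied from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py     # talks to /var/tmp/spdk.sock by default

$RPC bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target  # local NVMe becomes bdev spdk_targetn1
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

# The abort example is then pointed at that listener once per queue depth (4, 24, 64):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'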
00:35:43.981 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 39003, failed: 0 00:35:43.981 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2658, failed to submit 36345 00:35:43.981 success 593, unsuccess 2065, failed 0 00:35:43.981 10:32:17 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:43.981 10:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.981 10:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:43.981 10:32:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:43.981 10:32:17 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:43.981 10:32:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:43.981 10:32:17 -- common/autotest_common.sh@10 -- # set +x 00:35:45.358 10:32:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:45.358 10:32:18 -- target/abort_qd_sizes.sh@62 -- # killprocess 3690355 00:35:45.358 10:32:18 -- common/autotest_common.sh@926 -- # '[' -z 3690355 ']' 00:35:45.358 10:32:18 -- common/autotest_common.sh@930 -- # kill -0 3690355 00:35:45.358 10:32:18 -- common/autotest_common.sh@931 -- # uname 00:35:45.358 10:32:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:45.358 10:32:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3690355 00:35:45.358 10:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:45.358 10:32:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:45.358 10:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3690355' 00:35:45.358 killing process with pid 3690355 00:35:45.358 10:32:18 -- common/autotest_common.sh@945 -- # kill 3690355 00:35:45.358 10:32:18 -- common/autotest_common.sh@950 -- # wait 3690355 00:35:45.617 00:35:45.617 real 0m14.369s 00:35:45.617 user 0m57.570s 00:35:45.617 sys 0m2.122s 00:35:45.617 10:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:45.617 10:32:18 -- common/autotest_common.sh@10 -- # set +x 00:35:45.617 ************************************ 00:35:45.617 END TEST spdk_target_abort 00:35:45.617 ************************************ 00:35:45.617 10:32:18 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:45.617 10:32:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:45.617 10:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:45.617 10:32:18 -- common/autotest_common.sh@10 -- # set +x 00:35:45.617 ************************************ 00:35:45.617 START TEST kernel_target_abort 00:35:45.617 ************************************ 00:35:45.617 10:32:18 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:45.617 10:32:18 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:45.617 10:32:18 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:45.617 10:32:18 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:45.617 10:32:18 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:45.617 10:32:18 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:45.617 10:32:18 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:45.617 10:32:18 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:45.617 10:32:18 -- nvmf/common.sh@627 -- # local block nvme 00:35:45.617 10:32:18 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:45.617 10:32:18 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:45.617 10:32:18 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:45.617 10:32:18 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:48.906 Waiting for block devices as requested 00:35:48.906 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:35:48.906 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:48.906 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:49.165 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:49.165 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:49.165 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:49.165 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:49.425 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:49.425 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:49.425 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:49.684 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:49.684 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:49.684 10:32:22 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:49.684 10:32:22 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:49.684 10:32:22 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:49.684 10:32:22 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:49.684 10:32:22 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:49.684 No valid GPT data, bailing 00:35:49.943 10:32:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:49.943 10:32:23 -- scripts/common.sh@393 -- # pt= 00:35:49.943 10:32:23 -- scripts/common.sh@394 -- # return 1 00:35:49.943 10:32:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:49.943 10:32:23 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:49.943 10:32:23 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:49.943 10:32:23 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:49.943 10:32:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:49.943 10:32:23 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:49.943 10:32:23 -- nvmf/common.sh@654 -- # echo 1 00:35:49.943 10:32:23 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:49.943 10:32:23 -- nvmf/common.sh@656 -- # echo 1 00:35:49.943 10:32:23 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:49.943 10:32:23 -- nvmf/common.sh@663 -- # echo tcp 00:35:49.943 10:32:23 -- nvmf/common.sh@664 -- # echo 4420 00:35:49.943 10:32:23 -- nvmf/common.sh@665 -- # echo ipv4 00:35:49.943 10:32:23 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:49.943 10:32:23 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:49.943 00:35:49.943 Discovery Log Number of Records 2, Generation counter 2 00:35:49.943 =====Discovery Log Entry 0====== 00:35:49.943 trtype: tcp 00:35:49.943 adrfam: ipv4 00:35:49.943 
subtype: current discovery subsystem 00:35:49.943 treq: not specified, sq flow control disable supported 00:35:49.943 portid: 1 00:35:49.943 trsvcid: 4420 00:35:49.943 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:49.943 traddr: 10.0.0.1 00:35:49.943 eflags: none 00:35:49.943 sectype: none 00:35:49.943 =====Discovery Log Entry 1====== 00:35:49.943 trtype: tcp 00:35:49.943 adrfam: ipv4 00:35:49.943 subtype: nvme subsystem 00:35:49.943 treq: not specified, sq flow control disable supported 00:35:49.943 portid: 1 00:35:49.943 trsvcid: 4420 00:35:49.943 subnqn: kernel_target 00:35:49.943 traddr: 10.0.0.1 00:35:49.943 eflags: none 00:35:49.943 sectype: none 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.943 10:32:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:49.943 EAL: No free 2048 kB hugepages reported on node 1 00:35:53.234 Initializing NVMe Controllers 00:35:53.234 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:53.234 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:53.234 Initialization complete. Launching workers. 
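For kernel_target_abort the target is the kernel nvmet driver instead of SPDK, configured through configfs. The trace only shows the values being echoed, not the files they go into, so the attribute names below are filled in from the standard nvmet configfs layout and should be read as assumptions; the subsystem name, block device, address, and port are taken from this run:

SUBSYS=/sys/kernel/config/nvmet/subsystems/kernel_target
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$PORT"

echo SPDK-kernel_target > "$SUBSYS/attr_serial"        # identifier echoed above; the exact attribute is not visible in the trace
echo 1 > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1 > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"

ln -s "$SUBSYS" "$PORT/subsystems/"                    # expose the subsystem on the port

# Sanity check from the initiator side, matching the discovery output above:
nvme discover -t tcp -a 10.0.0.1 -s 4420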
00:35:53.234 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 49649, failed: 0 00:35:53.234 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 49649, failed to submit 0 00:35:53.234 success 0, unsuccess 49649, failed 0 00:35:53.234 10:32:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:53.234 10:32:26 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:53.234 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.524 Initializing NVMe Controllers 00:35:56.524 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:56.524 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:56.524 Initialization complete. Launching workers. 00:35:56.524 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 84246, failed: 0 00:35:56.524 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21242, failed to submit 63004 00:35:56.524 success 0, unsuccess 21242, failed 0 00:35:56.524 10:32:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.524 10:32:29 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:56.524 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.815 Initializing NVMe Controllers 00:35:59.815 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:59.815 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:59.815 Initialization complete. Launching workers. 
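Reading the abort summaries in this run, the counters are internally consistent: 'abort submitted' plus 'failed to submit' equals the namespace's 'I/O completed' count, and 'success' plus 'unsuccess' plus 'failed' equals 'abort submitted'. For example, the first spdk_target run above gives 1400 + 13393 = 14793 completed I/Os and 858 + 542 + 0 = 1400 submitted aborts, while the queue-depth-24 kernel_target run gives 21242 + 63004 = 84246 and 0 + 21242 + 0 = 21242.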
00:35:59.815 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80680, failed: 0 00:35:59.815 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20146, failed to submit 60534 00:35:59.815 success 0, unsuccess 20146, failed 0 00:35:59.815 10:32:32 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:35:59.815 10:32:32 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:35:59.815 10:32:32 -- nvmf/common.sh@677 -- # echo 0 00:35:59.815 10:32:32 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:35:59.815 10:32:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:59.815 10:32:32 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:59.815 10:32:32 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:59.815 10:32:32 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:35:59.815 10:32:32 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:35:59.815 00:35:59.815 real 0m13.627s 00:35:59.815 user 0m7.102s 00:35:59.815 sys 0m3.207s 00:35:59.815 10:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:59.815 10:32:32 -- common/autotest_common.sh@10 -- # set +x 00:35:59.815 ************************************ 00:35:59.815 END TEST kernel_target_abort 00:35:59.815 ************************************ 00:35:59.815 10:32:32 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:35:59.815 10:32:32 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:35:59.815 10:32:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:59.815 10:32:32 -- nvmf/common.sh@116 -- # sync 00:35:59.815 10:32:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:59.815 10:32:32 -- nvmf/common.sh@119 -- # set +e 00:35:59.815 10:32:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:59.815 10:32:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:59.815 rmmod nvme_tcp 00:35:59.815 rmmod nvme_fabrics 00:35:59.815 rmmod nvme_keyring 00:35:59.815 10:32:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:59.815 10:32:32 -- nvmf/common.sh@123 -- # set -e 00:35:59.815 10:32:32 -- nvmf/common.sh@124 -- # return 0 00:35:59.815 10:32:32 -- nvmf/common.sh@477 -- # '[' -n 3690355 ']' 00:35:59.815 10:32:32 -- nvmf/common.sh@478 -- # killprocess 3690355 00:35:59.815 10:32:32 -- common/autotest_common.sh@926 -- # '[' -z 3690355 ']' 00:35:59.815 10:32:32 -- common/autotest_common.sh@930 -- # kill -0 3690355 00:35:59.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3690355) - No such process 00:35:59.815 10:32:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3690355 is not found' 00:35:59.815 Process with pid 3690355 is not found 00:35:59.815 10:32:32 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:59.815 10:32:32 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:02.349 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:36:02.349 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.2 (8086 2021): Already using the ioatdma 
driver 00:36:02.349 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:02.349 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:02.349 10:32:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:02.349 10:32:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:02.349 10:32:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:02.349 10:32:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:02.349 10:32:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.349 10:32:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.349 10:32:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.881 10:32:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:04.881 00:36:04.881 real 0m43.414s 00:36:04.881 user 1m8.614s 00:36:04.881 sys 0m13.377s 00:36:04.881 10:32:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:04.881 10:32:37 -- common/autotest_common.sh@10 -- # set +x 00:36:04.881 ************************************ 00:36:04.881 END TEST nvmf_abort_qd_sizes 00:36:04.881 ************************************ 00:36:04.881 10:32:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:04.881 10:32:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:04.881 10:32:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:04.881 10:32:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:04.881 10:32:37 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:04.881 10:32:37 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:04.881 10:32:37 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:04.881 10:32:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:04.881 10:32:37 -- common/autotest_common.sh@10 -- # set +x 00:36:04.881 10:32:37 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:04.881 10:32:37 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:04.881 10:32:37 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:04.881 10:32:37 -- common/autotest_common.sh@10 -- # set +x 00:36:10.157 INFO: APP EXITING 00:36:10.157 INFO: killing all VMs 00:36:10.157 INFO: killing vhost app 00:36:10.157 WARN: no vhost pid file found 00:36:10.157 INFO: EXIT DONE 00:36:12.692 0000:86:00.0 (8086 0a54): Already using the nvme 
driver 00:36:12.692 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:12.693 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:15.983 Cleaning 00:36:15.983 Removing: /var/run/dpdk/spdk0/config 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:15.983 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:15.983 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:15.983 Removing: /var/run/dpdk/spdk1/config 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:15.983 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:15.983 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:15.983 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:15.983 Removing: /var/run/dpdk/spdk2/config 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:15.983 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:15.983 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:15.983 Removing: /var/run/dpdk/spdk3/config 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 
00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:15.983 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:15.983 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:15.983 Removing: /var/run/dpdk/spdk4/config 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:15.983 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:15.983 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:15.983 Removing: /dev/shm/bdev_svc_trace.1 00:36:15.983 Removing: /dev/shm/nvmf_trace.0 00:36:15.983 Removing: /dev/shm/spdk_tgt_trace.pid3266887 00:36:15.983 Removing: /var/run/dpdk/spdk0 00:36:15.983 Removing: /var/run/dpdk/spdk1 00:36:15.983 Removing: /var/run/dpdk/spdk2 00:36:15.983 Removing: /var/run/dpdk/spdk3 00:36:15.983 Removing: /var/run/dpdk/spdk4 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3264456 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3265681 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3266887 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3267625 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3269604 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3270937 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3271362 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3271694 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3272031 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3272351 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3272640 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3272920 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3273231 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3274084 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3277511 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3277823 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3278341 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3278371 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3278935 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3279201 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3279768 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3280034 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3280326 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3280412 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3280636 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3280902 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3281526 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3281788 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3282106 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3282431 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3282458 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3282526 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3282791 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3283076 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3283342 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3283630 00:36:15.983 Removing: 
/var/run/dpdk/spdk_pid3283896 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3284175 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3284450 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3284729 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3285001 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3285282 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3285549 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3285838 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3286104 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3286392 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3286658 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3286937 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3287210 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3287491 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3287761 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3288045 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3288310 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3288597 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3288867 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3289155 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3289419 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3289701 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3289974 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3290255 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3290525 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3290809 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3291073 00:36:15.983 Removing: /var/run/dpdk/spdk_pid3291360 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3291630 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3291920 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3292187 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3292480 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3292760 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3293118 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3293423 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3293703 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3293901 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3294360 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3298607 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3387474 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3392052 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3401724 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3407417 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3411887 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3412460 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3421635 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3421928 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3426514 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3433060 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3436637 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3447818 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3457432 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3459296 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3460356 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3478775 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3482821 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3488161 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3490057 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3492152 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3492432 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3492702 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3492982 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3493656 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3495729 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3497127 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3497703 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3503765 00:36:16.243 Removing: 
/var/run/dpdk/spdk_pid3509748 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3515052 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3554732 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3559113 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3565428 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3566938 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3568547 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3573548 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3577869 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3585713 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3585717 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3590648 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3590839 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3591103 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3591634 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3591639 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3593253 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3595104 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3596949 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3598738 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3600477 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3602320 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3608582 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3609096 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3610809 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3611739 00:36:16.243 Removing: /var/run/dpdk/spdk_pid3618282 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3621451 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3627369 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3633420 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3639652 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3640239 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3640960 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3641549 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3642396 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3643205 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3644009 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3644644 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3649159 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3649478 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3655833 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3656141 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3658595 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3667525 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3667530 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3672896 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3674908 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3677174 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3678386 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3680542 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3681880 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3691171 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3691705 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3692241 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3694737 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3695274 00:36:16.503 Removing: /var/run/dpdk/spdk_pid3695810 00:36:16.503 Clean 00:36:16.503 killing process with pid 3214713 00:36:24.630 killing process with pid 3214710 00:36:24.630 killing process with pid 3214712 00:36:24.889 killing process with pid 3214711 00:36:24.889 10:32:58 -- common/autotest_common.sh@1436 -- # return 0 00:36:24.889 10:32:58 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:36:24.889 10:32:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:24.889 10:32:58 -- common/autotest_common.sh@10 -- # set +x 00:36:25.149 10:32:58 -- spdk/autotest.sh@389 -- # timing_exit 
autotest 00:36:25.149 10:32:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:25.149 10:32:58 -- common/autotest_common.sh@10 -- # set +x 00:36:25.149 10:32:58 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:25.149 10:32:58 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:25.149 10:32:58 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:25.149 10:32:58 -- spdk/autotest.sh@394 -- # hash lcov 00:36:25.149 10:32:58 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:25.149 10:32:58 -- spdk/autotest.sh@396 -- # hostname 00:36:25.149 10:32:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:25.149 geninfo: WARNING: invalid characters removed from testname! 00:36:51.830 10:33:24 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:55.119 10:33:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.654 10:33:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:00.194 10:33:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:02.728 10:33:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:06.017 10:33:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:08.554 10:33:41 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:08.554 10:33:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.554 10:33:41 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:08.554 10:33:41 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.554 10:33:41 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.554 10:33:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.554 10:33:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.554 10:33:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.554 10:33:41 -- paths/export.sh@5 -- $ export PATH 00:37:08.554 10:33:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.554 10:33:41 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:08.554 10:33:41 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:08.554 10:33:41 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713342821.XXXXXX 00:37:08.554 10:33:41 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713342821.bUjjfW 00:37:08.554 10:33:41 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:08.554 10:33:41 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:37:08.554 10:33:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:37:08.554 10:33:41 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:08.554 10:33:41 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:08.554 10:33:41 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:08.554 10:33:41 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:08.554 10:33:41 -- common/autotest_common.sh@10 -- $ set +x 00:37:08.554 10:33:41 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:37:08.554 10:33:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:37:08.554 10:33:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.554 10:33:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:08.554 10:33:41 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:08.554 10:33:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:08.554 10:33:41 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:08.554 10:33:41 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:08.554 10:33:41 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:08.554 10:33:41 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:08.554 10:33:41 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:08.554 + [[ -n 3172257 ]] 00:37:08.554 + sudo kill 3172257 00:37:08.564 [Pipeline] } 00:37:08.583 [Pipeline] // stage 00:37:08.587 [Pipeline] } 00:37:08.604 [Pipeline] // timeout 00:37:08.609 [Pipeline] } 00:37:08.627 [Pipeline] // catchError 00:37:08.633 [Pipeline] } 00:37:08.649 [Pipeline] // wrap 00:37:08.655 [Pipeline] } 00:37:08.670 [Pipeline] // catchError 00:37:08.680 [Pipeline] stage 00:37:08.682 [Pipeline] { (Epilogue) 00:37:08.694 [Pipeline] catchError 00:37:08.695 [Pipeline] { 00:37:08.708 [Pipeline] echo 00:37:08.709 Cleanup processes 00:37:08.715 [Pipeline] sh 00:37:08.999 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:08.999 3710473 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:09.011 [Pipeline] sh 00:37:09.293 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:09.293 ++ grep -v 'sudo pgrep' 00:37:09.293 ++ awk '{print $1}' 00:37:09.293 + sudo kill -9 00:37:09.293 + true 00:37:09.304 [Pipeline] sh 00:37:09.587 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:27.687 [Pipeline] sh 00:37:27.969 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:27.969 Artifacts sizes are good 00:37:27.984 [Pipeline] archiveArtifacts 00:37:27.991 Archiving artifacts 00:37:28.235 [Pipeline] sh 00:37:28.559 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:28.835 [Pipeline] cleanWs 00:37:28.846 [WS-CLEANUP] Deleting project workspace... 00:37:28.846 [WS-CLEANUP] Deferred wipeout is used... 00:37:28.853 [WS-CLEANUP] done 00:37:28.855 [Pipeline] } 00:37:28.874 [Pipeline] // catchError 00:37:28.885 [Pipeline] sh 00:37:29.162 + logger -p user.info -t JENKINS-CI 00:37:29.171 [Pipeline] } 00:37:29.186 [Pipeline] // stage 00:37:29.191 [Pipeline] } 00:37:29.207 [Pipeline] // node 00:37:29.213 [Pipeline] End of Pipeline 00:37:29.250 Finished: SUCCESS
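The coverage post-processing traced above (the lcov calls issued from autotest.sh) is a capture/merge/filter pipeline. A condensed sketch follows, with the shared --rc options collected into one variable and the workspace path abbreviated as $ROOT; both are shorthands rather than the literal commands in the trace:

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$ROOT/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'

# capture counters for this test run, tagged with the host name used above
lcov $LCOV_OPTS -c -d "$ROOT" -t spdk-wfp-16 -o "$OUT/cov_test.info"
# merge with the baseline captured before the tests ran
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# drop coverage that is not SPDK's own code (DPDK, system headers, helper apps)
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done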